query | pos | idx | task_name
---|---|---|---
We present a data-driven approach to construct a library of feedback motion primitives for non-holonomic vehicles that guarantees bounded error in following arbitrarily long trajectories. This ensures that motion re-planning can be avoided as long as disturbances to the vehicle remain within a certain bound, and also potentially when the obstacles are displaced within a certain bound. The library is constructed along local abstractions of the dynamics, which enables the addition of new motion primitives through abstraction refinement. We provide sufficient conditions for the construction of such robust motion primitives for a large class of nonlinear dynamics, including commonly used models such as the standard Reeds-Shepp model. The algorithm is applied to motion planning and control of a rover subject to slipping, without prior modelling of the slip. Various state-of-the-art motion planning approaches for car-like vehicles use the bicycle model to generate feasible trajectories for high-level planning BID3. The model is either discretized in lattice-based methods or used as a heuristic for measuring distance between two states in sampling-based methods such as rapidly exploring random trees (RRT) BID1. It is then up to the low-level feedback controllers of the vehicle to follow the prescribed trajectory; an overview of this group of approaches can be found in Paden et al. BID2. This might prove a challenge in cases where the bicycle model does not resemble the actual vehicle dynamics closely enough; this may result in growing error between the prescribed trajectory and the vehicle's position, which in turn may require trajectory re-planning BID3. Recently, approaches such as Howard et al. BID0 and Schwarting et al. BID4 have been proposed that can incorporate the vehicle dynamics in planning to ensure collision avoidance by using model predictive control. While model predictive control can provide feasible trajectories for a large class of nonlinear models, it becomes prohibitively complex for long prediction horizons and may fall into local optima for short prediction horizons in non-convex problem settings BID5. In this work we follow an input discretization approach similar to lattice-based methods for motion planning. Instead of relying on a model, we sample from the input space similar to Howard et al. BID0. The main contribution of this work is that we construct locally linear abstractions of the system around samples in the input space and design local feedback rules to ensure a fixed upper bound on the state error after applying any motion primitive, considering both the linearization error and the initial state error. Therefore, we can guarantee bounded state error through application of the motion primitives at all times. The idea of feedback-based motion primitives has also been presented in Vukosavljev et al. BID6 for multi-agent drones with omni-directional controllability; the main contrast here is that we provide a tool for the construction of such motion primitives for non-holonomic vehicles. We pose an assumption we refer to as robustifiability in order to be able to synthesize such motion primitives. Consider a vehicle whose dynamics is governed by the following discrete-time nonlinear system: DISPLAYFORM0 with x(t) ∈ X ⊆ R^n as the system state, u(t) ∈ U ⊂ R^m as the system input, and W as a bounded disturbance. Let X and U be compact sets. The operator ⊕ is used to denote the Minkowski sum of two sets: DISPLAYFORM1 Let us be given a starting position of the agent x_0, a free sub-space F ⊆ X, and a goal region X_g ⊂ F.
We define the problem as follows. Problem 1: Given an agent with translation-invariant dynamics, find a sequence of state subsets (X(1), X(2), ..., X(T)) and a corresponding sequence of motion primitives, i.e. a control strategy (U_1, U_2, ..., U_T), such that: DISPLAYFORM2 III. APPROACH Our approach builds on defining each motion primitive as a composition of a constant input and a feedback control term. We start by defining coarse motion primitives by splitting the input space using a coarse grid. We take each grid cell center as the constant input for the respective motion primitive. We design a feedback control law around each center such that there exists a bound ε and a number of time steps k with the property that if the state uncertainty at time step t < T − k is less than ε, the state uncertainty at t + k will also be less than ε. By state uncertainty being less than ε we mean that the set of states where the system can be at time t + k fits inside an ε-ball. We have proved that under certain assumptions, such a feedback control and bound ε can be found. The assumptions are threefold: (i) the function f is twice differentiable, (ii) its Hessian is element-wise bounded, and (iii) it is so-called robustifiable, which is defined as follows. Definition 1: f is said to be robustifiable on X × U in k steps if and only if for all x ∈ int(X) and u_1, ..., u_n ∈ int(U) the robustifiability matrix [DISPLAYFORM3] is full rank, where f_k is a multi-step extension of the dynamics f. Note that being full rank is equivalent to not having any singular value equal to zero; as a result, we can associate a well-conditioned robustifiability matrix with good robustifiability, i.e. the possibility to steer the state in any arbitrary direction. An example of a robustifiable system is the Reeds-Shepp model, as can be seen in FIG1. It is robustifiable even when controlled only through the steering angle, but it is much better conditioned when controlled through both steering angle and velocity, which is also intuitive. For a linear system f(x, u) = Ax + Bu, robustifiability is equivalent to controllability: we have DISPLAYFORM4 Having ensured bounded state uncertainty, we will now attempt to find a sequence of motion primitives satisfying Problem 1. We translate the problem into a planning problem on a discrete graph, where vertices represent centers of ε-balls that are entirely in the free space, and edges are defined by motion primitives driving the system from one center of an ε-ball to another. The feedback control term ensures that regardless of where within the former ε-ball the system is, it will end up within the latter ε-ball. The discrete planning problem can then be addressed, e.g., via A*. If a satisfying plan cannot be found, we compute an over-approximation of the reachable set to determine whether the plan does not exist for the original system, or whether the grid was not fine enough to prove or disprove the existence of such a plan. In the latter case, we refine the grid on the input space, and repeat the procedure. The algorithm is asymptotically complete for deterministic systems. The size of the graph treated by A* grows exponentially with the number of refinements. The construction of motion primitives does not require a model of the dynamics. Through input sampling alone, and under the above-stated bounded-Hessian and robustifiability assumptions, it is possible to construct motion primitives that guarantee bounded state uncertainty at any point in time.
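For the linear special case mentioned above, the robustifiability test reduces to the familiar controllability rank test. The following is a minimal NumPy sketch of that check (not code from the paper; the system matrices below are illustrative placeholders, and the smallest singular value is reported as a rough conditioning indicator).

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, ..., A^(n-1)B] for the linear system x(t+1) = Ax(t) + Bu(t)."""
    n = A.shape[0]
    blocks, Ak_B = [], B
    for _ in range(n):
        blocks.append(Ak_B)
        Ak_B = A @ Ak_B
    return np.hstack(blocks)

def is_robustifiable_linear(A, B, tol=1e-9):
    """Full rank <=> no zero singular value; the smallest singular value
    indicates how well-conditioned (how 'robustifiable') the system is."""
    C = controllability_matrix(A, B)
    svals = np.linalg.svd(C, compute_uv=False)
    return bool(svals.min() > tol), float(svals.min())

# Toy chain-of-integrators system with a single input (illustrative only).
A = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.0, 0.0, 1.0]])
B = np.array([[0.0], [0.0], [0.1]])
print(is_robustifiable_linear(A, B))   # (True, smallest singular value)
```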
Furthermore, for an environment with a single convex moving obstacle, or in cases where obstacles can be considered one at a time based on their proximity to the vehicle, it is straightforward to extend the feedback strategy of the motion primitives using the vehicle's position relative to the obstacle rather than its absolute position, with the obstacle's motion rate taking the place of the bounded disturbance. In this case, however, the end state of the vehicle may not converge to the goal set as the obstacle moves, especially if the obstacle is close to the goal set. In general the reach-avoid problem is non-convex and as a result cannot be addressed only through continuous feedback, but in many practical cases this extension may help avoid re-planning. We tested our approach on an Erle-Rover Unmanned Ground Vehicle (UGV) in a room with motion capture for positioning and a slippery floor. The algorithm is run in MATLAB, communicating with the rover through ROS. The system model is derived through input sampling and is shown for one of the motion primitives in FIG0. | We show that under some assumptions on vehicle dynamics and environment uncertainty it is possible to automatically synthesize motion primitives that do not accumulate error over time. | 1,200 | scitldr |
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and implications deserve further in-depth study. Deep Convolutional Networks (DCNs) have enabled deep learning to become one of the primary tools for computer vision tasks. However, adversarial examples, slightly altered inputs that change the model's output, have raised concerns about their reliability and security. Adversarial perturbations can be defined as the noise patterns added to natural inputs to generate adversarial examples. Some of these perturbations are universal, i.e. the same pattern can be used to fool the classifier on a large fraction of the tested dataset (BID4). As shown in FIG1, it is interesting to observe that such Universal Adversarial Perturbations (UAPs) for DCNs contain structure in their noise patterns. Results from BID1 together with our results here suggest that DCNs are sensitive to procedural noise perturbations, and more specifically here to Gabor noise. Existing UAPs have some visual similarities with Gabor noise, as in FIG2. Convolutional layers induce a prior on DCNs to learn local spatial information BID2, and DCNs trained on natural image datasets, such as ImageNet, learn convolution filters that are similar in appearance to Gabor kernels and colour blobs BID15 BID11. (FIG1: UAPs generated for VGG-19 targeting specific layers using the singular vector method BID4. FIG2: Gabor noise patterns BID10 with decreasing frequency from left to right.) Gabor noise is a convolution between a Gabor kernel and a sparse white noise. Thus, we hypothesize that DCNs are sensitive to Gabor noise, as it exploits specific features learned by the convolutional filters. In this paper we demonstrate the sensitivity of 3 different DCN architectures (Inception v3, ResNet-50, and VGG-19) to Gabor noise on the ImageNet image classification task. We empirically observed that even random Gabor noise patterns can be effective for generating UAPs. Understanding this behaviour is important, as the generation and injection of Gabor noise is computationally inexpensive and, therefore, can become a threat to the security and reliability of DCNs. Compared to standard adversarial examples, UAPs reveal more general features that the DCN is sensitive to. In contrast, adversarial perturbations generated for specific inputs, though less detectable in many cases, can "overfit" and evade only on inputs they were generated for BID16. Previous approaches to generate UAPs use knowledge of the model's learned parameters. BID8 use the DeepFool algorithm BID7 iteratively over a set of images to construct a UAP. A different approach is proposed in BID9, where UAPs are computed using Generative Adversarial Nets (GANs). BID4 proposed the singular vector method to generate UAPs targeting specific layers of DCNs, learning a perturbation s that maximises the L_p-norm of the differences in the activations for that specific layer, f_i: DISPLAYFORM0 where the L_q-norm of s is constrained to ε. This can be approximated using the Jacobian for that layer: DISPLAYFORM1 The solution s that maximizes this is the (p, q)-singular vector, which can be computed with the power method BID0.
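As a rough illustration of the power-method step (a sketch under our own assumptions, not the authors' implementation), the dominant (2,2)-singular vector of a layer's Jacobian can be obtained by power iteration on J^T J; `layer_jacobian` is a hypothetical helper returning the Jacobian of the chosen layer's activations with respect to the input at a reference point.

```python
import numpy as np

def dominant_singular_vector(J, n_iters=50, seed=0):
    """Power iteration on J^T J: returns the top right singular vector of J,
    i.e. the input direction that maximally perturbs the layer's activations
    in the l2 sense."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(J.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        v = J.T @ (J @ v)
        v /= np.linalg.norm(v)
    return v

# J = layer_jacobian(model, layer_index, x_ref)   # hypothetical helper
# s = eps * dominant_singular_vector(J)           # scale to the norm budget
```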
Then, s is effective for generating UAPs targeting a specific layer in the DCN. The solutions obtained with this method for the first layers of DCNs (see FIG1) resemble the Gabor noise patterns shown in FIG2. However, none of these works highlights the interesting visual patterns that manifest in these UAPs. In contrast, we show that procedural noise can generate UAPs targeting DCNs in a systematic and efficient way. Gabor noise is the convolution of a sparse white noise and a Gabor kernel, making it a type of Sparse Convolution Noise BID5 BID6. The Gabor kernel g with parameters {κ, σ, λ, ω} is the product of a circular Gaussian and a harmonic function DISPLAYFORM0 where κ and σ are the magnitude and width of the Gaussian, and λ and ω are the frequency and orientation of the harmonic BID6. The value of the Gabor noise at point (x, y) is given by DISPLAYFORM1 where (x_i, y_i) are the coordinates of sparse random points and w_i are random weights. Gabor noise is an expressive noise function and has exponentially many parameterizations to explore. To simplify the analysis, we choose anisotropic Gabor noise, where the Gabor kernel parameters and weights are the same for each i. This results in noise patterns that have uniform orientation and thickness. We also normalize the variance spectrum of the Gabor noise using the algorithm in BID10 to achieve min-max oscillations within the pattern. For our experiments we use the validation set from the ILSVRC2012 ImageNet image classification task BID12 with 1,000 distinct categories. We use 3 pre-trained ImageNet DCN architectures from keras.applications: Inception v3 BID14, ResNet-50 BID3, and VGG-19 BID13. Inception v3 takes input images with dimensions 299 × 299 × 3, while the other two networks take images with dimensions 224 × 224 × 3. The kernel size κ = 23 is fixed so that the Gabor kernels will fill the entire image regardless of the distribution of points. The number of points i distributed will be proportional to the image dimensions, which is independent of the Gabor kernel parameters. The resulting Gabor noise parameters we control are Θ = {σ, ω, λ}. We test the sensitivity of the models with 1,000 random Gabor noise perturbations generated from uniformly drawn parameters Θ with σ, λ ∈ [1.5, 9] and ω ∈ [0, π]. We evaluate our Gabor noise on 5,000 random images from the validation set with an ℓ∞-norm constraint of ε = 12 on the noise. The choice of 12/256 ≈ 0.047 is consistent with other attacks on ImageNet-scale models with less than 5% perturbation magnitude. To provide a baseline, we also measure the sensitivity of the models to 1,000 uniform random noise perturbations from {−ε, ε}^(D×D×3), where D is the image's side length. This is useful for showing that the sensitivity to Gabor noise is not trivial. Given model output f, input x ∈ X, perturbation s, and small ε > 0, we define the universal sensitivity of a model on perturbation s over X as DISPLAYFORM0 The norm constraint on s ensures that the perturbation is small. For this paper, we choose the ∞-norm as it is straightforward to impose for Gabor noise perturbations and is often used in the adversarial machine learning literature. For classification tasks, it is also useful to consider the universal evasion rate of a perturbation s over X, |{x ∈ X : arg max f(x) ≠ arg max f(x + s)}| / |X|. This corresponds to the definition that an adversarial perturbation is a small change that alters the predicted output label. Note that we are not interested in the ground truth labels for x or x + s.
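To make the construction concrete, the following is a minimal NumPy sketch of anisotropic Gabor noise as sparse convolution noise. The exact kernel used in the paper is the (elided) equation above; this sketch assumes the standard Gabor-noise form from the procedural-noise literature (a circular Gaussian envelope times an oriented cosine harmonic), so constants and normalization may differ from the authors' setup.

```python
import numpy as np

def gabor_kernel(X, Y, kappa, sigma, lam, omega):
    # Circular Gaussian envelope times an oriented cosine harmonic
    # (assumed standard form; not copied from the paper).
    envelope = kappa * np.exp(-np.pi * sigma ** 2 * (X ** 2 + Y ** 2))
    harmonic = np.cos(2 * np.pi * lam * (X * np.cos(omega) + Y * np.sin(omega)))
    return envelope * harmonic

def anisotropic_gabor_noise(size, n_points, kappa, sigma, lam, omega, seed=0):
    """Sum of Gabor kernels placed at sparse random points with random +/-1
    weights, all sharing one kernel parameterization (uniform orientation
    and thickness)."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, 1.0, size)
    X, Y = np.meshgrid(xs, xs)
    noise = np.zeros((size, size))
    for _ in range(n_points):
        xi, yi = rng.uniform(0.0, 1.0, 2)
        wi = rng.choice([-1.0, 1.0])
        noise += wi * gabor_kernel(X - xi, Y - yi, kappa, sigma, lam, omega)
    # Crude stand-in for variance-spectrum normalization: rescale to [-1, 1].
    return noise / np.abs(noise).max()

# One random parameterization, scaled to the l_inf budget (12/256 in [0, 1] pixel scale).
pattern = (12 / 256) * anisotropic_gabor_noise(224, 200, kappa=1.0,
                                               sigma=4.0, lam=8.0, omega=1.0)
```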
We focus instead on how small changes to the input result in large changes to the model's original predictions. It is worth using both the universal sensitivity and the universal evasion metrics, as the former gives a continuous measure of the sensitivity, while the latter tells us on how much of the dataset the perturbation changes the decision of the model. Our results show that the order from least to most sensitive is Inception v3, ResNet-50, and then VGG-19. This is not surprising, as the validation accuracies of these models also appear in the same order. Overall, our experiments show that the three models are significantly more sensitive to the Gabor noise than to random noise. The universal sensitivity and evasion rates of random noise have very small variance and their values are clustered around their medians. TAB0 shows how close the quartiles of the random noise results are for VGG-19. Inception v3 is also insensitive to random noise, but has a moderate sensitivity to Gabor noise. ResNet-50 appears to be more sensitive to the random noise than VGG-19, but VGG-19 is more sensitive to Gabor noise than ResNet-50. This implies that, when comparing models, higher sensitivity to one type of perturbation does not imply the same relationship for another type of perturbation. The results in FIG3 suggest that, across the three models, a random Gabor noise is likely to affect the model outputs on a third or more of the input dataset. From the histograms, the Gabor noise perturbations appear to centre around relatively high modes for both metrics (the first quartile of Gabor noise is one such example). "Best" Parameters: Taking the top 10 perturbations that VGG-19 is most sensitive to, we see that the other two models are also very sensitive to these noise patterns. The ranges of the universal evasion rate for these are 69.7% to 71.4% for VGG-19, 50.7% to 53.4% for ResNet-50, and 37.9% to 39.4% for Inception v3. These values are all above the 3rd quartile for each of these models, showing their generalizability to the other models. In FIG5 we see a strong correlation (≥ 0.74) between the universal sensitivity and evasion rates across models. This further suggests that strong perturbations transfer across these models. We also see a weak correlation between λ and the sensitivity and evasion rates for Inception v3, though there appears to be none between λ and the sensitivity values for ResNet-50. The universal evasion rate of the perturbations appears to be insensitive to the Gaussian width σ and orientation ω. However, the sensitivity for small λ < 0.3 appears to fall below the average, suggesting that below a certain value the Gabor noise does not affect the model's decision. Interestingly, λ corresponds to the width or thickness of the bands in the image. Examples of Gabor noise perturbations can be seen in the appendix. Sensitivity of Inputs: The model's sensitivity could vary across the input dataset, meaning that the model's predictions are stable on some inputs while more susceptible to small perturbations on others. To measure this, we look at the sensitivity of single inputs over all perturbations. Given a set of perturbations s ∈ S, we define the average sensitivity of a model on input x over S as DISPLAYFORM0 and the average evasion rate on x over S as |{s ∈ S : arg max f(x) ≠ arg max f(x + s)}| / |S|. The bimodal distribution of the average evasion rate in FIG4 shows that for each model there are two large subsets of the data: one that is very sensitive and another that is very insensitive.
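A minimal sketch of the two evasion metrics above, under the assumption that `predict` is a wrapper that maps a batch of images to class scores (clipping to the valid pixel range is omitted for brevity):

```python
import numpy as np

def universal_evasion_rate(predict, images, s):
    """Fraction of inputs whose predicted label changes under a single
    universal perturbation s; ground-truth labels are not used."""
    clean = predict(images).argmax(axis=-1)
    perturbed = predict(images + s).argmax(axis=-1)
    return float(np.mean(clean != perturbed))

def average_evasion_rate(predict, x, perturbations):
    """Fraction of perturbations in S that change the prediction for one
    input x, i.e. how fragile the model is on that particular input."""
    clean = predict(x[None]).argmax(axis=-1)[0]
    flips = [predict((x + s)[None]).argmax(axis=-1)[0] != clean
             for s in perturbations]
    return float(np.mean(flips))
```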
The remaining data points are somewhat uniformly spread in the middle. Note that for Inception v3, there is a much larger fraction of data points whose prediction is not affected by Gabor perturbations. The distribution of the average sensitivity appears to have a similar shape, but with more inputs in the 0-20% range for Inception v3. The dataset is far less sensitive to random noise, with upwards of 60% of the dataset being insensitive to that noise across all models. The results show that the tested DCN models are sensitive to Gabor noise for a large fraction of the inputs, even when the parameters of the Gabor noise are chosen at random. This hints that Gabor noise may be representative of patterns learned at the earlier layers, as it appears visually similar to some UAPs targeting earlier layers in DCNs BID4. This phenomenon has important implications for the security and reliability of DCNs, as it can allow attackers to craft inexpensive black-box attacks. On the defender's side, Gabor noise patterns can also be used to efficiently generate data for adversarial training to improve DCN robustness. However, both the sensitivity exploited and the potential to mitigate it require a more in-depth understanding of the phenomena at play. In future work, it may be worth analyzing the sensitivity of hidden layer activations across different families of procedural noise patterns and investigating techniques to reduce the sensitivity of DCNs to perturbations. As seen in Figure 6, sensitivity metric values for random noise fall in a narrow range and are significantly smaller than the metric values of the Gabor noise. This is further shown when comparing the quartiles of the universal evasion and sensitivity in TAB1. Figures 9, 10, 11, 12, and 13 show some adversarial examples with the top perturbations. A large part of the input dataset is insensitive to random noise, as shown in TAB3, 6 and Figure 7, with about 60% of the dataset having near 0% average evasion over the random noise perturbations for all three models. | Existing Deep Convolutional Networks in image classification tasks are sensitive to Gabor noise patterns, i.e. small structured changes to the input cause large changes to the output. | 1,201 | scitldr |
Deep neural networks (DNNs) perform well on a variety of tasks despite the fact that most networks used in practice are vastly overparametrized and even capable of perfectly fitting randomly labeled data. Recent evidence suggests that developing "compressible" representations is key for adjusting the complexity of overparametrized networks to the task at hand and avoiding overfitting. In this paper, we provide new empirical evidence that supports this hypothesis, identifying two independent mechanisms that emerge when the network's width is increased: robustness (having units that can be removed without affecting accuracy) and redundancy (having units with similar activity). In a series of experiments with AlexNet, ResNet and Inception networks on the CIFAR-10 and ImageNet datasets, and also using shallow networks with synthetic data, we show that DNNs consistently increase either their robustness, their redundancy, or both at greater widths for a comprehensive set of hyperparameters. These results suggest that networks in the deep learning regime adjust their effective capacity by developing either robustness or redundancy. Deep neural networks (DNNs) are capable of successfully learning from examples in a wide variety of tasks. Though these networks are typically trained with large amounts of data, the number of free parameters in their architectures is often several orders of magnitude greater than the number of training examples. This overparametrization reflects the ability of DNNs to memorize entire datasets, even with randomized labels. Additionally, large networks not only tend to match the performance of small ones, but often generalize better (e.g. Neyshabur et al. (2017b)). Figure 1 demonstrates this for a variety of modern networks trained on ImageNet and CIFAR-10. These observations raise the question of how vastly overparametrized networks can perform well in structured tasks without overfitting. While DNNs appear to adapt their capacity to the complexity of the given task, precisely what causes them to do so remains an open question. Several previous studies have aimed to uncover why, out of the many optima an overparametrized network can reach to achieve 100% training accuracy, they tend toward ones that generalize well, often by proving generalization bounds for simple models related to weight matrix norms or Rademacher complexity. Prior work showed that, in certain networks, the crucial computations were performed by sparse subnetworks within them. In doing so, this work suggested that large networks tend to perform as well as or better than smaller ones because they more reliably contain fortuitously-initialized "lottery ticket" subnetworks. Here, we focus on the question of why generalization ability does not decrease as a network's degree of overparametrization increases. In doing so, we build on theoretical work connecting the compressibility of DNNs to their non-overfitting behavior. We find that various DNNs train toward regimes with different degrees of robustness and redundancy, but that at least one of the two properties, if not both, consistently emerges as a model's size is increased. Based on these results, we offer interpretations of the various ways in which DNNs may constrain their effective capacity to protect from overfitting.
It has been observed that the increase of overparametrization without loss of generalization for DNN models relates to adjustments of the model's complexity, given the task at hand. Observations such as Figure 2a,b show empirically that certain large networks are more robust to the ablation (dropout) of a fixed proportion of units than small ones, which might suggest that these large models are adjusting their complexity to the task at hand. However, Figure 2c shows that this robustness can be dependent on network initialization, and to our knowledge, the mechanisms responsible for these observations have not yet been the subject of empirical investigation. Given an increased level of overparametrization, we contemplate six capacity-constraining features which could prevent a network from overfitting. (We cannot a priori exclude the interplay of some of these aspects.) (i) Redundant functional units: units whose activations can be expressed linearly in terms of other units and which affect the output of the network. (ii) Redundant blocked units: units as above but which do not affect output. (iii) Nonredundant blocked units: units with noncorrelated activity which do not propagate to affect output. (iv) Silent units: units that are sparsely activated across datapoints. (v) Constant units: units that are consistently activated with low variance across datapoints. (vi) Semantically redundant units: units which are not linearly correlated to others and which provide a different representation from the previous one, similarly related to the learning task. In this work, we provide measures to aid in distinguishing which of these features large models may be developing in order to constrain their capacity. Consider a layer with n units inside a DNN and a dataset with m samples. We define robustness as the ability of the network to output consistent labels both unperturbed and when unit ablations (dropout) are applied to a certain proportion of its units. We quantify robustness by applying ablations to units during evaluation. We randomly apply these ablations to different proportions of units in our networks to obtain that network's ablation curve, as reported in Figure 2. We use the area under these ablation curves as a metric for robustness. This measure should aid in distinguishing features (i, vi), where we expect robustness to increase as n grows larger, from features (ii, iii, iv), where we expect it to remain unchanged. We define redundancy as a property of a model which can be quantified through two metrics, compressibility and similarity. Given a layer, we measure its compressibility by the number of principal components needed to explain 95% of the activation variance in the n units. We also introduce similarity as a measure of a special type of redundancy in which units are highly correlated. We determine two units in the same layer to be similar if their absolute-valued Pearson correlation coefficient exceeds a certain threshold (here, we use 0.5). Figure B2 shows the distributions of these coefficients for several AlexNet models. The use of both compressibility and similarity should discriminate features (i, ii), for which we expect an increase in both measures as n increases, from features (iv, v), for which we expect increasing compressibility and stable similarity (because the Pearson correlation normalizes variance). Redundancy and robustness do not imply each other, as the four scenarios following the sketch below illustrate.
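The following is a minimal sketch of the three measures, based on our reading of the definitions above rather than the original implementation. `A` is a matrix of activations with shape (m samples, n units), and `accuracy_with_ablation` is a hypothetical helper that evaluates the network's accuracy with a given fraction of randomly chosen units zeroed out.

```python
import numpy as np

def compressibility(A, var_explained=0.95):
    """Number of principal components needed to explain 95% of the
    activation variance across the n units of a layer."""
    A = A - A.mean(axis=0)
    svals = np.linalg.svd(A, compute_uv=False)
    ratios = np.cumsum(svals ** 2) / np.sum(svals ** 2)
    return int(np.searchsorted(ratios, var_explained) + 1)

def similarity(A, threshold=0.5):
    """Average number of similar units per unit, where two units are similar
    if their absolute Pearson correlation exceeds the threshold."""
    n = A.shape[1]
    corr = np.corrcoef(A, rowvar=False)
    mask = (np.abs(corr) > threshold) & ~np.eye(n, dtype=bool)
    return float(mask.sum(axis=1).mean())

def robustness(accuracy_with_ablation, fractions=np.linspace(0.0, 1.0, 11)):
    """Area under the ablation curve: accuracy as an increasing fraction of
    randomly chosen units is ablated at evaluation time."""
    accuracies = [accuracy_with_ablation(p) for p in fractions]
    return float(np.trapz(accuracies, fractions))
```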
To show this, let S and L denote respectively a small and a large deep network, both trained on the dataset D. Suppose that the generalization ability of L is equal to or greater than that of S. We provide an example for each of four possible scenarios. L is more robust and redundant than S: This scenario could be constructed using redundant functional neurons (i). Given S, we could duplicate each layer with outgoing weights w in S, except for the output layer, so that the new layers were [z, z] composites with [w/2, w/2] outgoing weights. L is equally or less robust and more redundant than S: This scenario could result from a "weight balancing" effect. As in the previous case, we could duplicate the neurons in each layer, but also add a large, opposite-signed constant η to the halved output weights of each unit and its duplicate so that the new layers were [z, z] composites with [w/2 + η, w/2 − η] outgoing weights. As η grows larger, outputs become less robust to ablation. Such an L would be more redundant, as each unit has a corresponding redundant twin (i), but at the same time less robust, as the pairs need to balance each other for stability. A second way this scenario could arise is from the addition of silent units (iv), which would tend to make activation vectors more compressible (though not more similar), but would not make the network more robust to the ablation of a fixed proportion of units. L is more robust and equally or less redundant than S: This can occur in certain cases in which the two models learn qualitatively different representations by forming units that are merely semantically redundant (vi). As an example, suppose that S learned holistic class representations but that L had the capacity to learn more complex bag-of-features class representations. If so, L would be more robust because of its bag-like representations without necessarily being more redundant. L is equally or less robust and redundant than S: Given S, such an L could be created by adding nonredundant blocked units (iii) to S. This operation increases neither redundancy, as the units are not linearly correlated, nor robustness, as the smaller proportion of essential units in L would be offset by an increase in the number of units dropped out with the ablation of a fixed proportion of units. The robustness-redundancy hypothesis: Consider a small deep network S and a large one L, both with the same architecture, trained with the same explicit regularization scheme, each with a tuned set of hyperparameters and the same random initialization scheme. Based on our findings, our central hypothesis is that alongside L generalizing as well as or better than S, L will be more robust and/or more redundant than S due to the effects of autoregularization. Table 1: Network training and performance details: "BN" refers to batch normalization, "DA" refers to data augmentation, "DO" refers to dropout, and "WD" refers to L2 weight decay. "Best" refers to learning rate/batch size combinations that we found to achieve the highest accuracy. Stars indicate factors for which we tested different hyperparameters/variants. To investigate redundancy and robustness across common cases used in machine learning research, we experiment on a variety of different tasks, networks, initializations, sizes, architectures, optimizers, and regularizers. Table 1 gives details for the training and performance of the networks and datasets we use. Features that we tested multiple variants of are marked with a star. Further details for all networks can be found in appendix A.
To see how robustness and redundancy vary as functions of a model's degree of overparametrization, we tested variants of each network in which the number of weights/filters in each layer/block/module was multiplied by factors of 1/4, 1/2, 1, 2, and 4. Networks: For experiments with synthetic, uncorrelated data, we used simple multilayer perceptrons (MLPs) with 1 hidden layer of 128 units for the 1x model size and ReLU activations. For experiments using CIFAR-10, we used scaled-down AlexNet models with two convolutional and two fully-connected layers, and ResNet56s with initial convolutions and 3 block layers. For the ImageNet dataset, we used ResNet18s with 4 block layers as well as Inception-v3 networks. Figure 1 shows the testing accuracy achieved by several of our ImageNet and CIFAR-10 networks. For these and all others we test, increasing model size resulted in either equal or improved performance. Due to hardware restrictions, we were not able to train any 2x or 4x sized Inception-v3s and instead experimented with versions of these networks where a single layer's size varied from 1/4x to 4x. Figure A1 plots the number of trainable parameters for each of our networks, showing that they increase exponentially with model size. We used the ImageNet dataset with approximately 1 million training images and 50,000 images for validation, and the CIFAR-10 dataset with a 50,000/5,000/10,000 train/validation/test split, for our larger-scale experiments. For small-scale ones using MLPs, we used synthetic, uncorrelated data generated by randomly-initialized teacher MLPs with binary output, identical in architecture to the 1/4x MLP models that we trained. We verified that our teacher networks output each label for between 40% and 60% of random inputs. We trained and evaluated our MLPs by default on datasets of 1,000 examples. All networks were trained for a fixed number of iterations except for the CIFAR-10 AlexNets, which were trained to the epoch of maximum validation accuracy. However, Figure C3 demonstrates that the trends in robustness and redundancy that we analyze are invariant to the amount of training time after convergence in the ResNet18s, and we observe the same for all other models. Initializations: By default, we initialized our MLP and Inception-v3 networks using random normal distributions with mean 0 and a fixed σ. In the MLPs, we experimented with various values for σ. For our AlexNet and ResNet models, we defaulted to using normal Xavier/Glorot initialization with mean 0 and relatively small variance σ² = 2/(fan_in + fan_out). However, in the AlexNets, we also experiment with medium-variance LeCun initialization σ² = √3/fan_in and high-variance He initialization σ² = 2/fan_in, as well as uniform initialization distributions. Optimizers: We use RMSProp in the Inception-v3s and the momentum optimizer in all other models by default. We also experiment with using momentumless stochastic gradient descent (SGD) and Adam with our MLPs. Regularizers: By default, the ResNet and Inception-v3 models are trained with batch normalization, data augmentation, and weight decay. To test how explicit regularization contributes to robustness and redundancy, we also analyze AlexNets trained with data augmentation, dropout, weight decay, and all three combined. Learning Rates and Batch Sizes: Typically in DNNs, varying batch size and learning rate jointly by a constant factor tends to affect training time but not performance, and they are commonly varied in practice.
To see how this affects robustness and redundancy, we experiment with varying learning rate and batch size jointly by factors of 1/4, 1, and 4. Due to the number of units in the models and the size of the datasets, fully analyzing all activation vectors for convolutional filters was intractable. Instead, we based our measures of robustness and redundancy on a sampling of units capped at 50,000 per layer. We average across three independent samplings and find that the variance between them is negligible. For each model, we also average across layers, and except for the ResNet18s and Inception-v3s, we also average across three independently-trained replicates. All error bars for robustness and compressibility plots show the standard deviation between independently trained model replicates when applicable, and between independent samplings of units when not. They are typically too small to appear in plots. Those for similarity show the average standard deviation within trials. For all experiments, we display results only for the test/validation set because results from the train set (except for accuracy) were almost identical. Here, we present our findings for how robustness and redundancy vary as a function of model size. We experiment across a wide variety of networks, hyperparameters, and training methods, and show that robustness and/or redundancy emerge in large models across cases used throughout modern machine learning research. Robustness emerges and redundancy varies in modern ImageNet models. Figure 3 shows that the ResNet18 and Inception-v3 models both become more robust as their model size increases. However, they demonstrate the independence of redundancy and robustness, with the ResNet18s developing more compressibility and much more similarity, and the Inception-v3s losing compressibility with size. We take this discrepancy between robustness and redundancy trends as strong evidence that these models, particularly the Inception-v3s, are autoregularizing largely by forming qualitatively different representations at different sizes, with units that are merely semantically redundant at large sizes. We also observe that compressibility and similarity do not predict each other particularly well, especially in the case of the ResNet18s, potentially due to different proportions of redundant and silent or constant units in these networks. Robustness and redundancy develop in networks trained on randomly-labeled data. Our central question revolves around how models are able to constrain their effective capacity to the task at hand, so it is natural to study robustness and redundancy in networks trained to memorize randomly labeled data. Even when fitting randomly labeled data, these models develop significant levels of redundancy and/or robustness at large sizes. Curiously, in the ResNet56 models, fitting random labels caused robustness to flatline but redundancy to increase relative to the models trained on correctly-labeled data. These trends strongly indicate that robustness and redundancy do not predict generalization capability. Robustness and redundancy are sensitive to initialization variance. Some recent work has suggested that network initialization has a large influence over generalization behavior. To see what effects it may have on robustness and redundancy, we test initializations with differing levels of variance. Figure 5 presents results for AlexNets trained with high-variance He, medium-variance LeCun, and low-variance Glorot initializations.
Robustness increases with model size for the Glorot and LeCun-initialized nets but exhibits a fairly flat trend for the He-initialized nets, which demonstrate a case where robustness does not emerge at large size. For the LeCun and He-initialized nets, some model size factor doublings coincide with more than a doubling of the number of similar units per unit, perhaps reflecting a fairly sharp transition to a differently-behaving regime when initialization variance becomes high enough. The fact that the He-initialized nets exhibit a strongly positive trend in redundancy but not robustness might be indicative of redundant blocked units. We also test AlexNets with uniform He, LeCun, and Glorot initializations and find their results (Figure C4) to be very similar to those of the normally initialized ones, suggesting that the initialization distribution matters little compared to the initialization variance. Adding to our analysis of initializations, Figure C5 shows the results of altering initialization variance in our MLPs trained on uncorrelated data. Those initialized with small variance develop more redundancy at larger model sizes, the opposite trend to our AlexNets. We also find that the MLPs initialized with very high variance develop neither more redundancy nor robustness. However, Figure C6 shows that the same is not the case for similar MLPs trained on low-dimensional data, suggesting that this phenomenon, in which neither robustness nor redundancy increases, is related to high dimensionality in the data. We restrict our robustness-redundancy hypothesis only to deep models and show that it applies to state-of-the-art networks, but this case, with a single-layer MLP, high-variance initialization, and high-dimensional, uncorrelated data, presents a limitation for our framework. We speculate that these models may be operating in a regime with unique representations or with nonredundant blocked units at large size factors. The robustness-redundancy hypothesis holds under a number of additional tests. Figure C7, Figure C8, Figure C9, and Figure C10 show that while individual layers in our ResNets and Inception-v3s display unique trends, each develops more redundancy or robustness at higher sizes and generally follows the trend of its network as a whole. There is no consistent relationship between a layer's depth and its robustness or redundancy. Because we speculate that redundancy and robustness in DNNs are a result of autoregularization, it is also natural to analyze their trends in networks trained with and without explicit regularization. Figure C11 shows that data augmentation, dropout, weight decay, and all three together changed the overall amount of redundancy or robustness developed in the CIFAR-10 AlexNet models, but they did not change the trends. To probe the influence of optimizers, we test training the MLP models using stochastic gradient descent (SGD), momentum, and Adam. Figure C12 shows that varying these optimizers can affect how much redundancy or robustness develops, but it does not change the overall trends. Additionally, to see if robustness and redundancy depend on learning rate and batch size, we test the ResNet18, ResNet56, and AlexNet models trained on ImageNet and CIFAR-10 with several experiments in which we vary the learning rate and batch size while keeping them proportional. Figure C13 and Figure C14 show that this has little to no effect on outcomes. In this work, we empirically analyze models in terms of their activations, which makes our results contextual to the input data.
Because of this, we are able to scale our analysis to state-of-the-art networks like ResNet18 and Inception-v3. By focusing not on the broad question of generalization, but on the subproblem of why networks do not perform worse when their size is increased, we are able to show that redundancy and robustness are central to how networks autoregularize. A related branch of work has focused on the relationship between a network's compressibility and its generalization behavior. Our results generally validate both of these approaches, but we show that different networks develop different compressible features and to different extents, so we speculate that both pruning unimportant units and compressing redundant units may be complementary tools for developing new compression algorithms. We also show that redundancy is highly sensitive to a network's initialization while its accuracy is not. This suggests that certain compression techniques could be improved greatly by validating over multiple initializations in order to produce maximally redundant models. We also make progress toward tightening our understanding of how compressible DNNs are, which prior work shows can lead to improved practical generalization bounds. Previous work suggests that redundancy implies robustness, and Morcos et al. (2018b) connect a network's robustness to the flattening of a layer's activation space along the direction of a single activation vector and to improved generalization performance. However, our findings suggest that these trends may not hold for all networks and that redundancy and robustness poorly predict generalization. Our work is also related to work that takes a theoretical approach to show that model networks in the overparametrized regime tend to develop weight vectors that align to a set of discrete directions determined by the input data. Our work suggests that these results may retain a high degree of explanatory power in some but not all state-of-the-art cases. Despite a great deal of recent progress, to our knowledge, ours is the first work to date that has quantitatively studied the connections between overparametrization, robustness, and redundancy together. We analyze these phenomena across a wide range of networks, which may aid in understanding how well theoretical findings (which are typically based on simple models) generalize to common networks in machine learning. We find that each network we analyze displays unique trends in robustness, compressibility, and similarity, yet that all deep ones develop more redundancy and/or robustness at large model sizes. We also demonstrate that the two are highly dependent on initialization and that high variance increases redundancy in some networks and decreases it in others. Limitations of our work include that we do not analyze cases with varying network depth and the fact that our single-layer MLPs with large initializations trained with high-dimensional, uncorrelated data do not seem to develop either increased robustness or redundancy at large model sizes. However, a recent strand of research has emerged illuminating similarities between deep networks and kernel machines and suggesting that networks with high-variance initializations can operate in a kernel-like regime, which we suspect relates to these findings for networks initialized with large variance.
In this paper, we jointly analyze the robustness and redundancy of deep neural networks with the aim of understanding why generalization ability does not tend to decrease as a network's degree of overparametrization increases. In doing so, we find that robustness and redundancy do not imply each other but that one or the other, or both, consistently increase alongside overparametrization. We connect these observations to various capacity-constraining features which DNNs may develop in order to support the connection between compressibility and generalization and to shed light on the features networks may develop to avoid overfitting. In doing so, we paint a more complex picture of robustness and redundancy than much previous work has assumed. By illustrating the relationships between these phenomena, we suggest various new research directions in the theory of learning and compression. We believe that together, these findings represent a milestone in understanding the emergent properties of overparametrized neural networks. ResNet18s: These networks were off-the-shelf models for the ImageNet dataset. They consisted of an initial convolution and batch norm followed by 4 building block (v1) layers, each with 2 blocks, and a fully connected layer leading to a softmax output. All kernel sizes in the initial layers and block layers were of size 7 × 7 and stride 2. All activations were ReLU. In the 1x sized model, the convolutions in the initial and block layers used 64, 64, 128, and 256 filters respectively. After Xavier/Glorot initialization, we trained them for 90 epochs with a default batch size of 256 and an initial default learning rate of 1, which decayed by a factor of 10 at epochs 30, 60, and 80. Training was done on the ILSVRC 2012 dataset with approximately 1 million images, and evaluation was done on 50,000 validation images. Optimization was done with SGD using 0.9 momentum. We used batch normalization, data augmentation with random cropping and flipping, and 0.0001 weight decay. Inception-v3s: These networks were off-the-shelf models for the ImageNet dataset. For the sake of brevity, we omit the architectural details here. After using a truncated normal initialization with σ = 0.1, we trained them for 90 epochs with a default batch size of 256 and an initial default learning rate of 1 with an exponential decay of 4% every 8 epochs. Training was done for 90 epochs on the ILSVRC 2012 dataset with approximately 1 million images, and evaluation was done on 50,000 validation images. Optimization was done with the RMSProp optimizer. We used a weight decay of 0.00004, augmentation with random cropping and flipping, and batch norm with 0.9997 decay on the mean and an epsilon of 0.001 to avoid dividing by zero. Due to hardware constraints, we were not able to train 2x and 4x variants of the network. Instead, we trained the 1/4x-1x sizes along with versions of the network with 1/4x-4x sizes for the "mixed 2: 35 x 35 x 288" layer only. AlexNets: We use a scaled-down version of AlexNet for the CIFAR-10 dataset, similar to ones used in prior work. The network consisted of a 5-layer neural network with two convolutional layers, two dense layers, and a readout layer. In each convolutional layer, 5 × 5 filters with stride 1 were applied, followed by max-pooling with a 3 × 3 kernel and stride 2. Importantly, local response normalization with a radius of 2, alpha = 2e-05, beta = 0.75, and bias = 1.0 was applied after each pooling, which has the effect of negatively correlating different units.
Each layer contained bias terms, and all activations were ReLU. In the 1x sized model, the convolutions used 96 and 256 filters, while the dense layers used 384 and 192 units. We trained these networks on 45,000 images with early stopping based on maximum performance on a 5,000-image validation set. The test set was 10,000 images. Weights were optimized with SGD using 0.9 momentum with an initial learning rate of 0.01, exponentially decaying by 5% every epoch. By default, and unless otherwise stated, we used Xavier/Glorot initialization, a batch size of 128, and no explicit regularizers. ResNet56s: These networks were off-the-shelf models for the CIFAR-10 dataset. They consisted of an initial convolution and batch norm followed by 3 building block (v1) layers, each with 9 blocks, and a fully connected layer leading to a softmax output. Kernels in the initial layers and block layers were of size 3 × 3 and stride 1. All activations were ReLU. In the 1x sized model, the convolutions in the initial and block layers used 16, 16, 32, 64, and 128 filters respectively. After initializing with Xavier/Glorot initialization, we trained them on 45,000 images for 182 epochs with a default batch size of 128 and an initial default learning rate of 1, which decayed by a factor of 10 at epochs 91 and 136. Testing was done on 10,000 images. Optimization was done with SGD using 0.9 momentum. We used batch normalization, data augmentation with random cropping and flipping (except for our variants trained on randomly labeled data), and 0.0002 weight decay. MLPs: We use simple multilayer perceptrons with either 10 or 10,000 inputs and binary output. They contained a single hidden layer with 128 units for the 1x model sizes and a bias unit. All hidden units were ReLU-activated. Weights were initialized using a normal distribution with a default σ of 0.01. Each was trained by default for 50 epochs on 1,000 examples produced by a 1/4x sized teacher network with the same architecture, which was verified to produce each output for between 40% and 60% of random inputs. Our criterion for similarity was based on the Pearson correlation r between two units in a layer. We considered two units to be similar if abs(r) was at least 0.5. Figure B2 shows the distribution of all absolute-valued correlation coefficients in unregularized, regularized, and random-label-fitting AlexNets in CIFAR-10. In each of these networks, more similar neurons are found in the first convolutional and final fully connected layers, and with the exception of the fully connected layers in the regularized models, the tails of the distributions extend higher at higher model sizes. Figure C11: Explicit regularization affects amounts but not trends for robustness and redundancy in AlexNets (CIFAR-10). Trends in (a) accuracy, (b) robustness, (c) compressibility, and (d) similarity with various explicit regularizers across size factors for AlexNets. We test data augmentation, dropout, weight decay, and all three together against the unregularized control. Each regularizer positively affects generalization performance, though weight decay only has a slightly positive effect. None serve to change the general trends in robustness and redundancy, but they scale the curves up or down. Data augmentation reduces robustness, compressibility, and similarity, possibly because it forces the AlexNets to approximate a more complex function than nonaugmented data does.
Dropout and weight decay, however, have a large and a small positive effect, respectively, on robustness, compressibility, and redundancy. We attribute this to dropout forcing networks to develop redundant units to cope with ablations and to weight decay pushing the weights of the network toward a smaller subspace. Figure C14: The learning rate and batch size factor has only a slight effect on compressibility in AlexNets and ResNet56s (CIFAR-10). Trends in (a) accuracy, (b) robustness, (c) compressibility, and (d) similarity across model sizes for AlexNets and ResNet56s. We vary a constant factor k from 1/4 to 4 as a multiplier for the batch sizes and learning rates. 4x AlexNets with k = 4 were not trained due to hardware restrictions. Compressibility tends to be slightly higher with higher k. | Probing robustness and redundancy in deep neural networks reveals capacity-constraining features which help to explain non-overfitting. | 1,202 | scitldr |
Deep networks realize complex mappings that are often understood by their locally linear behavior at or around points of interest. For example, we use the derivative of the mapping with respect to its inputs for sensitivity analysis, or to explain (obtain coordinate relevance for) a prediction. One key challenge is that such derivatives are themselves inherently unstable. In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions. While the problem is challenging in general, we focus on networks with piecewise linear activation functions. Our algorithm consists of an inference step that identifies a region around a point where the linear approximation is provably stable, and an optimization step to expand such regions. We propose a novel relaxation to scale the algorithm to realistic models. We illustrate our method with residual and recurrent networks on image and sequence datasets. Complex mappings are often characterized by their derivatives at points of interest. Such derivatives with respect to the inputs play key roles across many learning problems, including sensitivity analysis. The associated local linearization is frequently used to obtain explanations for model predictions BID3 BID24 BID28 BID26, explicit first-order local approximations BID22 BID17 BID31 BID1, or to guide learning through regularization of functional classes controlled by derivatives BID19 BID5. We emphasize that the derivatives discussed in this paper are with respect to the input coordinates rather than the parameters. The key challenge lies in the fact that derivatives of functions parameterized by deep learning models are not stable in general BID14. State-of-the-art deep learning models are typically over-parametrized BID37, leading to unstable functions as a by-product. The instability is reflected in both the function values BID17 and the derivatives BID14 BID0. Due to unstable derivatives, first-order approximations used for explanations therefore also lack robustness BID14 BID0. We note that gradient stability is a notion different from adversarial examples. A stable gradient can be large or small, so long as it remains approximately invariant within a local region. Adversarial examples, on the other hand, are small perturbations of the input that change the predicted output BID17. A large local gradient, whether stable or not in our sense, is likely to contribute to finding an adversarial example. Robust estimation techniques used to protect against adversarial examples focus on stable function values rather than stable gradients, but can nevertheless indirectly impact (potentially help) gradient stability. A direct extension of robust estimation to ensure gradient stability would involve finding maximally distorted derivatives and require access to approximate Hessians of deep networks. In this paper, we focus on deep networks with piecewise linear activations to make the problem tractable. The special structure of this class of networks (functional characteristics) allows us to infer lower bounds on the ℓ_p margin, i.e. the maximum radius of ℓ_p-norm balls around a point where derivatives are provably stable. In particular, we investigate the special case of p = 2, since the lower bound has an analytical solution and permits us to formulate a regularization problem to maximize it.
The ing objective is, however, rigid and non-smooth, and we further relax the learning problem in a manner resembling (locally) support vector machines (SVM) BID29 BID8.Both the inference and learning problems in our setting require evaluating the gradient of each neuron with respect to the inputs, which poses a significant computational challenge. For piecewise linear networks, given D-dimensional data, we propose a novel perturbation algorithm that collects all the exact gradients by means of forward propagating O(D) carefully crafted samples in parallel without any back-propagation. When the GPU memory cannot fit O(D) samples in one batch, we develop an unbiased approximation to the objective with a random subset of such samples. Empirically, we examine our inference and learning algorithms with fully-connected (FC), residual (ResNet) , and recurrent (RNN) networks on image and time-series datasets with quantitative and qualitative experiments. The main contributions of this work are as follows:• Inference algorithms that identify input regions of neural networks, with piecewise linear activation functions, that are provably stable.• A novel learning criterion that effectively expand regions of provably stable derivatives.• Novel perturbation algorithms that scale computation to high dimensional data.• Empirical evaluation with several types of networks. For tractability reasons, we focus in this paper on neural networks with piecewise linear activation functions, such as ReLU BID15 and its variants (; ; BID2 . Since the nonlinear behavior of deep models is mostly governed by the activation function, a neural network defined with affine transformations and piecewise linear activation functions is inherently piecewise linear . For example, FC, convolutional neural networks (CNN) , RNN, and ResNet are all plausible candidates under our consideration. We will call this kind of networks piecewise linear networks throughout the paper. The proposed approach is based on a mixed integer linear representation of piecewise linear networks, activation pattern BID20, which encodes the active linear piece (integer) of the activation function for each neuron; once an activation pattern is fixed, the network degenerates to a linear model (linear). Thus the feasible set corresponding to an activation pattern in the input space is a natural region where derivatives are provably stable (same linear function). Note the possible degenerate case where neighboring regions (with different activation patterns) nevertheless have the same end-to-end linear coefficients BID23. We call the feasible set induced by an activation pattern BID23 a linear region, and a maximal connected subset of the input space subject to the same derivatives of the network a complete linear region. Activation pattern has been studied in various contexts, such as visualizing neurons BID13, reachability of a specific output value , its connection to vector quantization BID4, counting the number of linear regions of piecewise linear networks BID20 Montúfar, 2017; BID23, and adversarial attacks BID7 BID13 BID32 or defense. Note the distinction between locally linear regions of the functional mapping and decision regions defined by classes BID36; BID9 ).Here we elaborate differences between our work and the two most relevant categories above. In contrast to quantifying the number of linear regions as a measure of complexity, we focus on the local linear regions, and try to expand them via learning. 
The notion of stability we consider differs from adversarial examples. The methods themselves are also different. Finding the exact adversarial example is in general NP-complete (; BID25, and mixed integer linear programs that compute the exact adversarial example do not scale BID7 BID13 . Layer-wise relaxations of ReLU activations BID32 are more scalable but yield bounds instead exact solutions. Empirically, even relying on relaxations, the defense (learning) methods are still intractable on ImageNet scale images BID10. In contrast, our inference algorithm certifies the exact 2 margin around a point subject to its activation pattern by forwarding O(D) samples in parallel. In a high-dimensional setting, where it is computationally challenging to compute the learning objective, we develop an unbiased estimation by a simple sub-sampling procedure, which scales to ResNet on 299 × 299 × 3 dimensional images in practice. The proposed learning algorithm is based on the inference problem with 2 margins. The derivation is reminiscent of the SVM objective BID29 BID8, but differs in its purpose; while SVM training seeks to maximize the 2 margin between data points and a linear classifier, our approach instead maximizes the 2 margin of linear regions around each data point. Since there is no label information to guide the learning algorithm for each linear region, the objective is unsupervised and more akin to transductive/semi-supervised SVM (TSVM) BID30 BID6. In the literature, the idea of margin is also extended to nonlinear classifiers in terms of decision boundaries BID12. Concurrently, BID9 also leverages the (raw) p margin on small networks for adversarial training. In contrast, we develop a smooth relaxation of the p margin and novel perturbation algorithms, which scale the computation to realistic networks, for gradient stability. The problem we tackle has implications for interpretability and transparency of complex models. The gradient has been a building block for various explanation methods for deep models, including gradient saliency map BID24 and its variants BID27 BID28 BID26, which apply a gradient-based attribution of the prediction to the input with nonlinear post-processings for visualization (e.g., normalizing and clipping by the 99 th percentile BID26 BID28). While one of the motivations for this work is the instability of gradient-based explanations BID14 BID0, we focus more generally on the fundamental problem of establishing robust derivatives. To simplify the exposition, the approaches are developed under the notation of FC networks with ReLU activations, which naturally generalizes to other settings. We first introduce notation, and then present our inference and learning algorithms. All the proofs are provided in Appendix A. We consider a neural network θ with M hidden layers and N i neurons in the i th layer, and the corresponding function f θ: R D → R L it represents. We use z i ∈ R Ni and a i ∈ R Ni to denote the vector of (raw) neurons and activated neurons in the i th layer, respectively. We will use x and a 0 interchangeably to represent an input instance from R D = R N0. With an FC architecture and ReLU activations, each a i and z i are computed with the transformation matrix W i ∈ R Ni×Ni−1 and bias DISPLAYFORM0 where [M] denotes the set {1, . . ., M}. We use subscript to further denote a specific neuron. To avoid confusion from other instancesx ∈ R D, we assert all the neurons z i j are functions of the specific instance denoted by x. 
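To make the notation concrete, the following is a minimal numpy sketch of a toy fully-connected ReLU network that records, for every hidden neuron z_ij, the indicator o_ij = +1 if z_ij > 0 and -1 otherwise (what Definition 1 below calls an activation pattern); all function and variable names here are ours, not the paper's.

```python
import numpy as np

def hidden_neurons_and_pattern(x, weights, biases):
    """Return all pre-activations z_ij and the indicators o_ij = sign(z_ij)."""
    zs, pattern, a = [], [], x
    for W, b in zip(weights, biases):
        z = W @ a + b                       # z_i = W_i a_{i-1} + b_i
        zs.append(z)
        pattern.append(np.where(z > 0, 1, -1))
        a = np.maximum(z, 0.0)              # a_i = ReLU(z_i)
    return zs, pattern

# two nearby inputs that share the same pattern share the same local linear behavior
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((16, 4)), rng.standard_normal((8, 16))]
bs = [rng.standard_normal(16), rng.standard_normal(8)]
x = rng.standard_normal(4)
_, p1 = hidden_neurons_and_pattern(x, Ws, bs)
_, p2 = hidden_neurons_and_pattern(x + 1e-3 * rng.standard_normal(4), Ws, bs)
print(all(np.array_equal(a, b) for a, b in zip(p1, p2)))
```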
The output of the network is a linear transformation of the last hidden layer DISPLAYFORM1 The output can be further processed by a nonlinearity such as softmax for classification problems. However, we focus on the piecewise linear property of neural networks represented by f θ (x), and leverage a generic loss function L(f θ (x), y) to fold such nonlinear mechanism. We use D to denote the set of training data (x, y), D x to denote the same set without labels y, and B,p (x):= {x ∈ R D : x − x p ≤} to denote the p -ball around x with radius.The activation pattern BID20 used in this paper is defined as: Definition 1. (Activation Pattern) An activation pattern is a set of indicators for neurons O = {o i ∈ {−1, 1} Ni |i ∈ [M]} that specifies the following functional constraints: DISPLAYFORM2 i j is called an activation indicator. Note that a point on the boundary of a linear region is feasible for multiple activation patterns. The definition fits the property of the activation pattern discussed in §2. We define ∇ x z i j to be the sub-gradient found by back-propagation using ∂a DISPLAYFORM3 DISPLAYFORM4, and the feasible set of the activation pattern is equivalent to DISPLAYFORM5 Remark 3. Lemma 2 characterizes each linear region of f θ as the feasible set S(x) with a set of linear constraints with respect to the input space R D, and thus S(x) is a convex polyhedron. The aforementioned linear property of an activation pattern equipped with the input space constraints from Lemma 2 yield the definition ofˆ x,p, the p margin of x subject to its activation pattern: DISPLAYFORM6 where S(x) can be based on any feasible activation pattern O on x; 3 therefore, ∂a DISPLAYFORM7 } is ensured with respect to some feasible activation pattern O. Note thatˆ x,p is a lower bound of the p margin subject to a derivative specification (i.e., a complete linear region).3.2.1 DIRECTIONAL VERIFICATION, THE CASES p = 1 AND p = ∞ We first exploit the convexity of S(x) to check the feasibility of a directional perturbation. Proposition 4. (Directional Feasibility) Given a point x, a feasible set S(x) and a unit vector ∆x, if ∃¯ ≥ 0 such that x +¯ ∆x ∈ S(x), then f θ is linear in {x + ∆x : 0 ≤ ≤¯}.The feasibility of x +¯ ∆x ∈ S(x) can be computed by simply checking whether x +¯ ∆x satisfies the activation pattern O in S(x). Proposition 4 can be applied to the feasibility problem on 1 -balls. Proposition 5. (1 -ball Feasibility) Given a point x, a feasible set S(x), and an 1 -ball B,1 (x) with extreme points x 1,..., DISPLAYFORM8 Proposition 5 can be generalized for an ∞ -ball. However, in high dimension D, the number of extreme points of an ∞ -ball is exponential to D, making it intractable. Instead, the number of extreme points of an 1 -ball is only linear to D (+ and − for each dimension). With the above methods to verify feasibility, we can do binary searches to find the certificates of the margins for directional perturbationsˆ x,∆x:= max {≥0:x+ ∆x∈S(x)} and 1 -ballsˆ x,1. The details are in Appendix B. The feasibility onˆ x,1 is tractable due to convexity of S(x) and its certification is efficient by a binary search; by further exploiting the polyhedron structure of S(x),ˆ x,2 can be certified analytically. Proposition 6. (2 -ball Certificate) Given a point x,ˆ x,2 is the minimum 2 distance between x and the union of hyperplanes DISPLAYFORM0 To compute the 2 distance between x and the hyperplane induced by a neuron z i j, we evaluate DISPLAYFORM1 where all the z i j can be computed by a single forward pass. 
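A minimal numpy sketch of the Proposition-6 certificate under the same toy fully-connected ReLU setting (helper names are ours): each hidden neuron contributes a hyperplane at l2 distance |z_ij| / ||grad_x z_ij||_2, the margin is the minimum over all neurons, and the per-neuron gradients are obtained in closed form under the activation pattern fixed at x, so no back-propagation is involved.

```python
import numpy as np

def l2_margin(x, weights, biases):
    """Certificate eps_hat_{x,2} = min_ij |z_ij| / ||grad_x z_ij||_2.
    J tracks d a_{i-1}/d x under the activation pattern fixed at x, so every
    per-neuron gradient is a row of W_i @ J."""
    a, J, eps = x, np.eye(x.size), np.inf
    for W, b in zip(weights, biases):
        z = W @ a + b
        Jz = W @ J                                    # rows are grad_x z_ij
        eps = min(eps, np.min(np.abs(z) / (np.linalg.norm(Jz, axis=1) + 1e-12)))
        mask = (z > 0).astype(float)                  # ReLU activation indicators
        a, J = mask * z, mask[:, None] * Jz
    return eps

rng = np.random.default_rng(1)
Ws = [rng.standard_normal((16, 4)), rng.standard_normal((8, 16))]
bs = [rng.standard_normal(16), rng.standard_normal(8)]
print(l2_margin(rng.standard_normal(4), Ws, bs))
```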
4 We will show in §4.1 that all the ∇ x z i j can also be computed efficiently by forward passes in parallel. We refer readers to FIG3 to see a visualization of the certificates on 2 margins. The sizes of linear regions are related to their overall number, especially if we consider a bounded input space. Counting the number of linear regions in f θ is, however, intractable due to the combinatorial nature of the activation patterns BID23. We argue that counting the number of linear regions on the whole space does not capture the structure of data manifold, and we propose to certify the number of complete linear regions (#CLR) of f θ among the data points D x, which turns out to be efficient to compute given a mild condition. Here we use #A to denote the cardinality of a set A, and we have Lemma 7. (Complete Linear Region Certificate) If every data point x ∈ D x has only one feasible activation pattern denoted as O(x), the number of complete linear regions of f θ among D x is upperbounded by the number of different activation patterns #{O(x)|x ∈ D x }, and lower-bounded by the number of different Jacobians #{J x f θ (x)|x ∈ D x }. In this section, we focus on methods aimed at maximizing the 2 marginˆ x,2, since it is (sub-)differentiable. We first formulate a regularization problem in the objective to maximize the margin: DISPLAYFORM0 However, the objective itself is rather rigid due to the inner-minimization and the reciprocal of ∇ x z i j 2. Qualitatively, such rigid loss surface hinders optimization and may attend infinity. To alleviate the problem, we do a hinge-based relaxation to the distance function similar to SVM. FORMULA12 is also optimal for Eq.. DISPLAYFORM1 If the condition in Lemma 8 does not hold, Eq. FORMULA12 is still a valid upper bound of Eq. due to a smaller feasible set. An upper bound of Eq. FORMULA12 can be obtained consequently due to the constraints: DISPLAYFORM2 We then derive a relaxation that solves a smoother problem by relaxing the squared root and reciprocal on the 2 norm as well as the hard constraint with a hinge loss to a soft regularization problem: DISPLAYFORM3 where C is a hyper-parameter. The relaxed regularization problem can be regarded as a maximum aggregation of TSVM losses among all the neurons, where a TSVM loss with only unannotated data D x can be written as: min DISPLAYFORM4 4 Concurrently, BID9 find that the p marginˆ x,p can be similarly computed as which pursues a similar goal to maximize the 2 margin in a linear model scenario, where the margin is computed between a linear hyperplane (the classifier) and the training points. DISPLAYFORM5 To visualize the effect of the proposed methods, we make a toy 2D binary classification dataset, and train a 4-layer fully connected network with 1) (vanilla) binary cross-entropy loss L(·, ·), 2) distance regularization as in Eq., and 3) relaxed regularization as in Eq. FORMULA14. Implementation details are in Appendix F. The ing piecewise linear regions and prediction heatmaps along with gradient ∇ x f θ (x) annotations are shown in FIG3. The distance regularization enlarges the linear regions around each training point, and the relaxed regularization further generalizes the property to the whole space; the relaxed regularization possesses a smoother prediction boundary, and has a special central region where the gradients are 0 to allow gradients to change directions smoothly. Since a linear region is shaped by a set of neurons that are "close" to a given a point, a noticeable problem of Eq. 
FORMULA14 is that it only focuses on the "closest" neuron, making it hard to scale the effect to large networks. Hence, we make a generalization to the relaxed loss in Eq. with a set of neurons that incur high losses to the given point. We denoteÎ(x, γ) as the set of neurons with top γ percent relaxed loss (TSVM loss) on x. The generalized loss is our final objective for learning RObust Local Linearity (ROLL) and is written as: DISPLAYFORM0 A special case of Eq. FORMULA17 is when γ = 100 (i.e. Î(x, 100) = I), where the nonlinear sorting step effectively disappears. Such simple additive structure without a nonlinear sorting step can stabilize the training process, is simple to parallelize computation, and allows for an approximate learning algorithm as will be developed in §4.2. Besides, taking γ = 100 can induce a strong synergy effect, as all the gradient norms ∇ x z i j 2 2 in Eq. between any two layers are highly correlated. The 2 marginˆ x,2 and the ROLL loss in Eq. demands heavy computation on gradient norms. While calling back-propagation |I| times is intractable, we develop a parallel algorithm without calling a single back-propagation by exploiting the functional structure of f θ.Given an activation pattern, we know that each hidden neuron z i j is also a linear function of x ∈ S(x). We can construct another linear network g θ that is identical to f θ in S(x) based on the same set of parameters but fixed linear activation functions constructed to mimic the behavior of f θ in S(x). Due to the linearity of g θ, the derivatives of all the neurons to an input axis can be computed by forwarding two samples: subtracting the neurons with an one-hot input from the same neurons with a zero input. The procedure can be amortized and parallelized to all the dimensions by feeding To analyze the complexity of the proposed approach, we assume that parallel computation does not incur any overhead and a batch matrix multiplication takes a unit operation. To compute the gradients of all the neurons for a batch of inputs, our perturbation algorithm takes 2M operations, while back-propagation takes DISPLAYFORM0 The detailed analysis is also in Appendix C. Despite the parallelizable computation of ∇ x z i j, it is still challenging to compute the loss for large networks in a high dimension setting, where even calling D + 1 forward passes in parallel as used in §4.1 is infeasible due to memory constraints. Hence we propose an unbiased estimator of the ROLL loss in Eq. FORMULA17 whenÎ(x, γ) = I. Note that (i,j)∈I C max(0, 1 − |z i j |) is already computable in one single forward pass. For the sum of gradient norms, we use the following equivalent decoupling: DISPLAYFORM0 where the summation inside the expectation in the last equation can be efficiently computed using the procedure in §4.1 and is in general storable within GPU memory. In practice, we can uniformly sample D (1 ≤ D D) input axes to have an unbiased approximation to Eq., where computing all the partial derivatives with respect to D axes only requires D + 1 times memory (one hot vectors and a zero vector) than a typical forward pass for x. The proposed algorithms can be used on all the deep learning models with affine transformations and piecewise linear activation functions by enumerating every neuron that will be imposed an ReLU-like activation function as z i j. They do not immediately generalize to the nonlinearity of maxout/max-pooling BID16 ) that also yields a piecewise linear function. 
We provide an initial step towards doing so in the Appendix E, but we suggest to use an average-pooling or convolution with large strides instead, since they do not induce extra linear constraints as maxpooling and do not in general yield significant difference in performance BID27. In this section, we compare our approach ('ROLL') with a baseline model with the same training procedure except the regularization ('vanilla') in several scenarios. All the reported quantities are computed on a testing set. Experiments are run on single GPU with 12G memory. Evaluation Measures: 1) accuracy (ACC), 2) number of complete linear regions (#CLR), and 3) p margins of linear regionsˆ x,p. We compute the marginˆ x,p for each testing point x with p ∈ {1, 2}, and we evaluateˆ x,p on 4 different percentiles P 25, P 50, P 75, P 100 among the testing data. DISPLAYFORM0 Figure 2: Parameter analysis on MNIST dataset. P 50 ofˆ x,2 is the median ofˆ x,2 in the testing data. We use a 55, 000/5, 000/10, 000 split of MNIST dataset for training/validation/testing. Experiments are conducted on a 4-layer FC model with ReLU activations. The implementation details are in Appendix G. We report the two models with the largest medianˆ x,2 among validation data given the same and 1% less validation accuracy compared to the baseline model. The are shown in TAB1. The tuned models have γ = 100, λ = 2, and different C as shown in the table. The condition in Lemma 7 for certifying #CLR is satisfied with tight upper bound and lower bound, so a single number is reported. Given the same performance, the ROLL loss achieves about 10 times larger margins for most of the percentiles than the vanilla loss. By tradingoff 1% accuracy, about 30 times larger margins can be achieved. The Spearman's rank correlation betweenˆ x,1 andˆ x,2 among testing data is at least 0.98 for all the cases. The lower #CLR in our approach than the baseline model reflects the existence of certain larger linear regions that span across different testing points. All the points inside the same linear region in the ROLL model with ACC= 98% have the same label, while there are visually similar digits (e.g., 1 and 7) in the same linear region in the other ROLL model. We do a parameter analysis in Figure 2 with the ACC and P 50 ofˆ x,2 under different C, λ and γ when the other hyper-parameters are fixed. As expected, with increased C and λ, the accuracy decreases with an increased 2 margin. Due to the smoothness of the curves, higher γ values reflect less sensitivity to hyper-parameters C and λ. To validate the efficiency of the proposed method, we measure the running time for performing a complete mini-batch gradient descent step (starting from the forward pass) on average. We compare 1) the vanilla loss, 2) the full ROLL loss (γ = 100) in Eq. computed by back-propagation, 3) the same as 2) but computed by our perturbation algorithm, and 4) the approximate ROLL loss in Eq. computed by perturbations. The approximation is computed with 3 = D/256 samples. The are shown in TAB2. The accuracy and 2 margins of the approximate ROLL loss are comparable to the full loss. Overall, our approach is only twice slower than the vanilla loss. The approximate loss is about 9 times faster than the full loss. Compared to back-propagation, our perturbation algorithm achieves about 12 times empirical speed-up. In summary, the computational overhead of our method is minimal compared to the vanilla loss, which is achieved by the perturbation algorithm and the approximate loss. 
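For concreteness, the following PyTorch sketch gives one plausible reading of the ROLL penalty of Eq. (7)/(9) that the timings above refer to, restricted to fully-connected ReLU layers; C and γ are the paper's hyper-parameters, while the function name, the Jacobian recurrence used in place of the perturbation computation, and the shapes are our assumptions.

```python
import torch
import torch.nn.functional as F

def roll_penalty(x, layers, C=1.0, gamma=100.0):
    # layers: list of torch.nn.Linear giving the hidden (ReLU) layers of f_theta.
    # For every hidden neuron z_ij we form the TSVM-style term
    #     ||grad_x z_ij||_2^2 + C * max(0, 1 - |z_ij|)
    # and sum the top `gamma` percent of terms per example.
    a, J = x, None                   # J holds d a_{i-1}/d x, shape (B, n_{i-1}, D)
    terms = []
    for lin in layers:
        z = lin(a)                                           # (B, n_i)
        Jz = lin.weight if J is None else lin.weight @ J     # d z_i / d x
        if Jz.dim() == 2:                                    # first layer: expand to batch
            Jz = Jz.unsqueeze(0).expand(x.shape[0], -1, -1)
        grad_sq = (Jz ** 2).sum(dim=-1)                      # ||grad_x z_ij||_2^2
        hinge = F.relu(1.0 - z.abs())
        terms.append(grad_sq + C * hinge)
        mask = (z > 0).to(z.dtype).detach()                  # fixed activation pattern
        a, J = mask * z, mask.unsqueeze(-1) * Jz
    terms = torch.cat(terms, dim=1)
    k = max(1, int(round(terms.shape[1] * gamma / 100.0)))
    return terms.topk(k, dim=1).values.sum(dim=1).mean()
```

In training, this term would simply be added to the task loss with weight λ, e.g. loss = F.cross_entropy(logits, y) + lam * roll_penalty(x, hidden_layers); the unbiased approximation of Eq. (8) corresponds to keeping only a random subset of input coordinates (columns of Jz) when forming grad_sq.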
Table 4: ResNet on Caltech-256. Here ∆(x, x, y) denotes 1 gradient distortion ∇ x f θ (x) y − ∇ x f θ (x) y 1 (the smaller the better for each r percentile P r among the testing data). DISPLAYFORM1 We train RNNs for speaker identification on a Japanese Vowel dataset from the UCI machine learning repository BID11 with the official training/testing split. 6 The dataset has variable sequence length between 7 and 29 with 12 channels and 9 classes. We implement the network with the state-of-the-art scaled Cayley orthogonal RNN (scoRNN) , which parameterizes the transition matrix in RNN using orthogonal matrices to prevent gradient vanishing/exploding, with LeakyReLU activation. The implementation details are in Appendix H. The reported models are based on the same criterion as §5.1.The are reported in TAB3. With the same/1% inferior ACC, our approach leads to a model with about 4/20 times larger margins among the percentiles on testing data, compared to the vanilla loss. The Spearman's rank correlation betweenˆ x,1 andˆ x,2 among all the cases are 0.98. We also conduct sensitivity analysis on the derivatives by findingˆ x,∆x along each coordinate ∆x ∈ ∪ i ∪ 12 j=1{−e i,j, e i,j} (e i,j k,l = 0, ∀k, l except e i,j i,j = 1), which identifies the stability bounds [ˆ x,−e i,j,ˆ x,e i,j] at each timestamp i and channel j that guarantees stable derivatives. The visualization using the vanilla and our ROLL model with 98% ACC is in FIG4. Qualitatively, the stability bound of the ROLL regularization is consistently larger than the vanilla model. We conduct experiments on Caltech-256 BID18, which has 256 classes, each with at least 80 images. We downsize the images to 299 × 299 × 3 and train a 18-layer ResNet with initializing from parameters pre-trained on ImageNet BID10 ). The approximate ROLL loss in Eq. is used with 120 random samples on each channel. We randomly select 5 and 15 samples in each class as the validation and testing set, respectively, and put the remaining data into the training set. The implementation details are in Appendix I.Evaluation Measures: Due to high input dimensionality (D ≈ 270K), computing the certificateŝ x,1,ˆ x,2 is computationally challenging without a cluster of GPUs. Hence, we turn to a samplebased approach to evaluate the stability of the gradients f θ (x) y for the ground-truth label in a local region with a goal to reveal the stability across different linear regions. Note that evaluating the gradient of the prediction instead is problematic to compare different models in this case. Given labeled data (x, y), we evaluate the stability of gradient ∇ x f θ (x) y in terms of expected 1 distortion (over a uniform distribution) and the maximum 1 distortion within the intersection B,∞ (x) = B,∞ (x) ∩ X of an ∞ -ball and the domain of images X = 299×299×3. The 1 gradient distortion is defined as ∆(x, x, y):= ∇ x f θ (x) y − ∇ x f θ (x) y 1. For a fixed x, we refer to Figure 4: Visualization of the examples in Caltech-256 that yield the P 50 (above) and P 75 (below) of the maximum 1 gradient distortions among the testing data on our ROLL model. The adversarial gradient is found by maximizing the distortion ∆(x, x, y) over the ∞ -norm ball with radius 8/256. the maximizer ∇ x f θ (x) y as the adversarial gradient. Computation of the maximum 1 distortion requires optimization, but gradient-based optimization is not applicable since the gradient of the loss involves the Hessian ∇ 2 x f θ (x) y which is either 0 or ill-defined due to piecewise linearity. 
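The expected l1 distortion, unlike the maximum, is a plain Monte-Carlo quantity; a hedged PyTorch sketch is given below (the sample count and the [0, 1] input range are our placeholders — the text uses 8000 samples), while the maximum over the ball is what the black-box search described next is needed for.

```python
import torch

def grad_wrt_input(model, x, y):
    # gradient of the ground-truth logit f_theta(x)_y with respect to the input
    x = x.clone().detach().requires_grad_(True)
    out = model(x.unsqueeze(0))[0, y]
    return torch.autograd.grad(out, x)[0]

def expected_l1_distortion(model, x, y, eps=8.0 / 256.0, n_samples=100):
    # Monte-Carlo estimate of E[ ||grad f(x')_y - grad f(x)_y||_1 ] over x'
    # drawn uniformly from the L_inf ball of radius eps around x, clamped to
    # the (assumed) [0, 1] image domain.
    g0 = grad_wrt_input(model, x, y)
    total = 0.0
    for _ in range(n_samples):
        xp = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
        total += (grad_wrt_input(model, xp, y) - g0).abs().sum().item()
    return total / n_samples
```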
Hence, we use a genetic algorithm BID33 for black-box optimization. Implementation details are provided in Appendix J. We use 8000 samples to approximate the expected 1 distortion. Due to computational limits, we only evaluate 1024 random images in the testing set for both maximum and expected 1 gradient distortions. The ∞ -ball radius is set to 8/256.The along with precision at 1 and 5 (P@1 and P@5) are presented in Table 4. The ROLL loss yields more stable gradients than the vanilla loss with marginally superior precisions. Out of 1024 examined examples x, only 40 and 42 gradient-distorted images change prediction labels in the ROLL and vanilla model, respectively. We visualize some examples in Figure 4 with the original and adversarial gradients for each loss. Qualitatively, the ROLL loss yields stable shapes and intensities of gradients, while the vanilla loss does not. More examples with integrated gradient attributions BID28 are provided in Appendix K. This paper introduces a new learning problem to endow deep learning models with robust local linearity. The central attempt is to construct locally transparent neural networks, where the derivatives faithfully approximate the underlying function and lends itself to be stable tools for further applications. We focus on piecewise linear networks and solve the problem based on a margin principle similar to SVM. Empirically, the proposed ROLL loss expands regions with provably stable derivatives, and further generalize the stable gradient property across linear regions. DISPLAYFORM0, and the feasible set of the activation pattern is equivalent to DISPLAYFORM1 Ifx is feasible to the fixed activation pattern o 1 j, it is equivalent to thatx satisfies the linear constraint DISPLAYFORM2 in the first layer. Assumex has satisfied all the constraints before layer i > 1. We know if all the previous layers follows the fixed activation indicators, it is equivalent to rewrite each DISPLAYFORM3 Then for j ∈ [N i], it is clear that z DISPLAYFORM4 The proof follows by induction. Proposition 4. (Directional Feasibility) Given a point x, a feasible set S(x) and a unit vector ∆x, if ∃¯ ≥ 0 such that x +¯ ∆x ∈ S(x), then f θ is linear in {x + ∆x : 0 ≤ ≤¯}.Proof. Since S(x) is a convex set and x, x +¯ ∆x ∈ S(x), {x + ∆x : 0 ≤ ≤¯} ⊆ S(x). Proposition 5.(1 -ball Feasibility) Given a point x, a feasible set S(x), and an 1 -ball B,1 (x) with extreme points DISPLAYFORM0 Proof. S(x) is a convex set and DISPLAYFORM1. Hence, ∀x ∈ B,1 (x), we know x is a convex combination of x 1,..., x 2D, which implies x ∈ S(x). Proposition 6. (2 -ball Certificate) Given a point x,ˆ x,2 is the minimum 2 distance between x and the union of hyperplanes DISPLAYFORM0 Proof. Since S(x) is a convex polyhedron and x ∈ S(x), B,2 (x) ⊆ S(x) is equivalent to the statement: the hyperplanes induced from the linear constraints in S(x) are away from x for at least in 2 distance. Accordingly, the minimizing 2 distance between x and the hyperplanes is the maximizing distance that satisfies B,2 (x) ⊆ S(x). FORMULA12 is also optimal for Eq.. DISPLAYFORM1 Proof. The proof is based on constructing a neural network feasible in Eq. that has the same loss as the optimal model in Eq.. Since the optimum in Eq. FORMULA12 DISPLAYFORM2 C PARALLEL COMPUTATION OF THE GRADIENTS BY LINEARITYWe denote the corresponding neurons z i j and a DISPLAYFORM3 givenx, highlighting its functional relationship with respect to a new inputx. 
The network g θ is constructed with exactly the same weights and biases as f θ but with a well-crafted linear activation function o i j = max(0, o i j) ∈ {0, 1}. Note that since o is given,ô is fixed. Then each layer in g θ is represented as: DISPLAYFORM4 We note thatâ i (x),ô i, andẑ i (x) are also functions of x, which we omitted for simplicity. Since the new activation functionô is fixed given x, effectively it applies the same linearity toẑ DISPLAYFORM5 We then do the following procedure to collect the partial derivatives with respect to an input axis k: 1) feed a zero vector 0 to g θ to getẑ i j and 2) feed a unit vector e k on the axis to getẑ i j (e k). Then the derivative of each neuron z i j with respect to x k can be computed aŝ DISPLAYFORM6 where the first equality comes from the linearity ofẑ i j (x) with respect to anyx. With the procedure, the derivative of all the neurons to an input dimension can be computed with 2 forward pass, which can be further scaled by computing all the gradients of z To analyze the complexity of the proposed approach, we assume that parallel computation does not incur any overhead and a batch matrix multiplication takes a unit operation. In this setting, a typical forward pass up to the last hidden layer takes M operations. To compute the gradients of all the neurons for a batch of inputs, our perturbation algorithm first takes a forward pass to obtain the activation patterns for the batch of inputs, and then takes another forward pass with perturbations to obtain the gradients. Since both forward passes are done up to the last hidden layers, it takes 2M operations in total. In contrast, back-propagation cannot be parallelized among neurons, so computing the gradients of all the neurons must be done sequentially. For each neuron z i j, it takes 2i operations for backpropagation to compute its gradient (i operations for each of the forward and backward pass). Hence, it takes M i=1 2iN i operations in total for back-propagations to compute the same thing. We can exploit the chain-rule of Jacobian to do dynamic programming for computing all the gradients of z i j. Note that all the gradients of z i j in the i th layer can be represented by the Jacobian J x z i.Then 1) For the first layer, the Jacobian is trivially J x z 1 = W 1. 2) We then iterate higher layers with the Jacobian of previous layers by chain rules J x z i = W i J z i−1 a i−1 J x z i−1, where J x z i−1 and W i are stored and J z i−1 a i−1 is simply the Jacobian of activation function (a diagonal matrix with 0/1 entries for ReLU activations). The dynamic programming approach is efficient for fully connected networks, but is inefficient for convolutional layers, where explicitly representing the convolutional operation in the form of linear transformation (∈ R Ni+1×Ni) is expensive. Here we only make an introductory guide to the derivations for maxout/max-pooling nonlinearity. The goal is to highlight that it is feasible to derive inference and learning methods upon a piecewise linear network with max-pooling nonlinearity, but we do not suggest to use it since a max-pooling neuron would induce new linear constraints; instead, we suggest to use convolution with large strides or average-pooling which do not incur any constraint. 
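Returning to the Appendix-C construction just described, the following numpy sketch shows the zero-vector / one-hot perturbation trick for a toy fully-connected ReLU network (all names are ours); in high dimension, the D unit vectors would be sub-sampled as in Eq. (8).

```python
import numpy as np

def all_neuron_gradients(x, weights, biases):
    # Step 1: one ordinary forward pass to record the activation pattern at x.
    masks, a = [], x
    for W, b in zip(weights, biases):
        z = W @ a + b
        masks.append((z > 0).astype(float))
        a = np.maximum(z, 0.0)
    # Step 2: forward the zero vector and the D one-hot vectors through g_theta,
    # which reuses the weights but applies the *fixed* masks as its (linear)
    # activations; by linearity, z_hat(e_k) - z_hat(0) = d z / d x_k.
    D = x.size
    batch = np.concatenate([np.zeros((1, D)), np.eye(D)], axis=0)   # (D+1, D)
    grads, A = [], batch
    for W, b, m in zip(weights, biases, masks):
        Z = A @ W.T + b                  # hat z_i for all D+1 crafted inputs
        grads.append(Z[1:] - Z[0])       # grads[i][k, j] = d z_{i,j} / d x_k
        A = Z * m                        # fixed linear activation of g_theta
    return grads
```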
For simplicity, we assume the target network has a single nonlinearity, which maps N neurons to 1 output by the maximum a DISPLAYFORM0 Then we can define the corresponding activation pattern o = o 1 1 ∈ [N] as which input is selected: DISPLAYFORM1 It is clear to see once an activation pattern is fixed, the network again degenerates to a linear model, as the nonlinearity in the max-pooling effectively disappears. Such activation pattern induces a feasible set in the input space where derivatives are guaranteed to be stable, but such representation may have a similar degenerate case where two activation patterns yield the same linear coefficients. The feasible set S(x) of a feasible activation pattern O = {o 1 1} at x can be derived as: DISPLAYFORM2 To check its correctness, we know that Eq. FORMULA0 is equivalent to DISPLAYFORM3 where the linear constraints are evident, and the feasible set is thus again a convex polyhedron. As a , all the inference and learning algorithms can be applied with the linear constraints. Clearly, for each max-pooling neuron with N inputs, it will induce N − 1 linear constraints. The FC model consists of M = 4 fully-connected hidden layers, where each hidden layer has 100 neurons. The input dimension D is 2 and the output dimension L is 1. The loss function L(f θ (x), y) is sigmoid cross entropy. We train the model for 5000 epochs with Adam optimizer, and select the model among epochs based on the training loss. We fix C = 5, and increase λ ∈ {10 −2, . . ., 10 2} for both the distance regularization and relaxed regularization problems until the ing classifier is not perfect. The tuned λ in both cases are 1. The data are normalized with µ = 0.1307 and σ = 0.3081. We first compute the marginˆ x,p in the normalized data, and report the scaled margin σˆ x,p in the table, which reflects the actual margin in the original data space since DISPLAYFORM0 so the reported margin should be perceived in the data space of X = 28×28.We compute the exact ROLL loss during training (i.e., approximate learning is not used). The FC model consists of M = 4 fully-connected hidden layers, where each hidden layer has 300 neurons. The activation function is ReLU. The loss function L(f θ (x), y) is a cross-entropy loss with soft-max performed on f θ (x). The number of epochs is 20, and the model is chosen from the best validation loss from all the epochs. We use stochastic gradient descent with Nesterov momentum. The learning rate is 0.01, the momentum is 0.5, and the batch size is 64.Tuning: We do a grid search on λ, C, γ, with λ ∈ {2 −3, . . ., 2 2}, C ∈ {2 −2, . . ., 2 3}, γ ∈ {max, 25, 50, 75, 100} (max refers to Eq. FORMULA14), and report the models with the largest validation 2,50 given the same and 1% less validation accuracy compared to the baseline model (the vanilla loss). The data are not normalized. We compute the exact ROLL loss during training (i.e., approximate learning is not used). The representation is learned with a single layer scoRNN, where the state embedding from the last timestamp for each sequence is treated as the representation along with a fully-connected layer to produce a prediction as f θ (x). We use LeakyReLU as the activation functions in scoRNN. The dimension of hidden neurons in scoRNN is set to 512. The loss function L(f θ (x), y) is a cross-entropy loss with soft-max performed on f θ (x). We use AMSGrad optimizer BID21. The learning rate is 0.001, and the batch size is 32 (sequences).Tuning: We do a grid search on λ ∈ {2 −6, . . ., 2 3}, C ∈ {2 −5, . . 
., 2 7}, and set γ = 100. The models with the largest testingˆ 2,50 given the same and 1% less testing accuracy compared to the baseline model (the vanilla loss) are reported. along each channel. We train models on the normalized images, and establish a bijective mapping between the normalized distance and the distance in the original space with the trick introduced in Appendix G. The bijection is applied to our sample-based approach to compute We download the pre-trained ResNet-18 from PyTorch , and we revise the model architecture as follows: 1) we replace the max-pooling after the first convolutional layer with average-pooling to reduce the number of linear constraints (because max-pooling induces additional linear constraints on activation pattern, while average-pooling does not), and 2) we enlarge the receptive field of the last pooling layer such that the output will be 512 dimension, since ResNet-18 is originally used for smaller images in ImageNet data (most implementations use 224 × 224 × 3 dimensional images for ImageNet while our data has even higher dimension 299 × 299 × 3). DISPLAYFORM0 We train the model with stochastic gradient descent with Nesterov momentum for 20 epochs. The initial learning rate is 0.005, which is adjusted to 0.0005 after the first 10 epochs. The momentum is 0.5. The batch size is 32. The model achieving the best validation loss among the 20 epochs is selected. Tuning: Since the training is computationally demanding, we first fix C = 8, use only 18 samples (6 per channel) for approximate learning, and tune λ ∈ {10 −6, 10 −5, . . .} until the model yields significantly inferior validation accuracy than the vanilla model. Afterwards, we fix λ to the highest plausible value (λ = 0.001) and try to increase C ∈ {8, 80, . . .}, but we found that C = 8 is already the highest plausible value. Finally, we train a model with 360 random samples (120 per channel) for approximate learning to improve the quality of approximation. We implement a genetic algorithm (GA) BID33 with 4800 populations P and 30 epochs. Initially, we first uniformly sample 4800 samples (called chromosome in GA literature) in the domain B,∞ (x) ∩ X for P. In each epoch, 1. ∀c ∈ P, we evaluate the 1 distance of its gradient from that of the target x: DISPLAYFORM0 2. (Selection) we sort the samples based on the 1 distance and keep the top 25% samples in the population (denoted asP).3. (Crossover) we replace the remaining 75% samples with a random linear combination of a pair (c, c) fromP as: DISPLAYFORM1 4. (Projection) For all the updated samples c ∈ P, we do an ∞ -projection to the domain B,∞ (x) ∩ X to ensure the feasibility. Finally, the sample in P that achieves the maximum 1 distance is returned. We didn't implement mutation in our GA algorithm due to computational reasons. For the readers who are not familiar with GA, we comment that the crossover operator is analogous to a gradient step where the direction is determined by other samples and the step size is determined randomly. We visualize the following images:• Original image.• Original gradient: the gradient on the original image.• Adversarial gradient: the maximum 1 distorted gradient in B,∞ (x) ∩ X.• Image of adv. gradient: the image that yields adversarial gradient.• Original int. gradient: the integrated gradient attribution BID28 on the original image.• Adversarial int. gradient: the integrated gradient attribution BID28 on the'image of adv. gradient'. 
Note that we didn't perform optimization to find the image that yields the maximum distorted integrated gradient. We follow a common implementation in the literature BID26 BID28 to visualize gradients and integrated gradients by the following procedure:1. Aggregating derivatives in each channel by summation.2. Taking absolute value of aggregated derivatives.3. Normalizing the aggregated derivatives by the 99 th percentile 4. Clipping all the values above 1.After this, the derivatives are in the range 299×299, which can be visualized as a gray-scaled image. The original integrated gradient paper visualizes the element-wise product between the grayscaled integrated gradient and the original image, but we only visualize the integrated gradient to highlight its difference in different settings since the underlying images (the inputs) are visually indistinguishable. We visualize the examples in Caltech-256 dataset that yield the P 25, P 50, P 75, P 100 (P r denotes the r th percentile) of the maximum 1 gradient distortions among the testing data on our ROLL model in Figure 5 and 6, where the captions show the exact values of the maximum 1 gradient distortion for each image. Note that the exact values are slightly different from Table 4, because each percentile in Table 4 is computed by an interpolation between the closest ranks (as in numpy.percentile), and the figures in Figure 5 and 6 are chosen from the images that are the closest to the percentiles. Figure 5: Visualization of the examples in Caltech-256 dataset that yield the P 25 (above) and P 50 (below) of the maximum 1 gradient distortions among the testing data on our ROLL model. For the vanilla model, the maximum 1 gradient distortion ∆(x, x, y) is equal to 893.3 for'Projector' in Figure 5g Figure 6: Visualization of the examples in Caltech-256 dataset that yield the P 75 (above) and P 100 (below) of the maximum 1 gradient distortions among the testing data on our ROLL model. For the vanilla model, the maximum 1 gradient distortion ∆(x, x, y) is equal to 1547.1 for'Bear' in Figure 6g and 5473.5 for'Rainbow' in Figure 6q. For the ROLL model, the maximum 1 gradient distortion ∆(x, x, y) is equal to 1367.9 for'Bear' in Figure 6e and 3882.8 for'Rainbow' in Figure 6o. | A scalable algorithm to establish robust derivatives of deep networks w.r.t. the inputs. | 1,203 | scitldr |
Recent years have witnessed two seemingly opposite developments of deep convolutional neural networks (CNNs). On one hand, increasing the density of CNNs by adding cross-layer connections achieve higher accuracy. On the other hand, creating sparsity structures through regularization and pruning methods enjoys lower computational costs. In this paper, we bridge these two by proposing a new network structure with locally dense yet externally sparse connections. This new structure uses dense modules, as basic building blocks and then sparsely connects these modules via a novel algorithm during the training process. Experimental demonstrate that the locally dense yet externally sparse structure could acquire competitive performance on benchmark tasks (CIFAR10, CIFAR100, and ImageNet) while keeping the network structure slim. Under the inspiration of bridging these two trends and search more efficient network structures, our paper explores methods which directly introduce sparsity into network structure thus avoid pruningafter-training strategy. In neural science, papers (e.g. BID4 BID35 BID10) concentrating on the brain structure reveal that neuron connections in brain perform a locally dense but externally sparse property as paper BID4 shown, that the closer two regions are, the denser the connections between them will be. Visual cortex papers BID2 show that while sensory information arrives at the cortex, it is fed up through hierarchy regions from primary area V1 up to higher areas such as V2, V4 and IT. Inside of each cor-tex layer, tightly packed pyramidal cells consist of basic locally dense structures in the brain. While our brain has been trained over time, internal densely connected modules will form a few long distance and cross-level connections to transfer information to higher hierarchy. Modular structures have shown vital importance in our brain behaviors such as specializing in information processing BID9, performing focal functions BID1, and supporting complex neural dynamics. In this case, instead of creating local density by pruning redundancy on the trained model, we perform local density by prefixing untrained dense modules as tightly packed neuron cell in the human brain and let it evolving both the weights of itself and the sparse connection between them via training. Since DenseNet has reached theoretical densest connection status, we use a similar dense block structure with growth rate k, but only with very narrow channels in each block. The growth rate k BID16 ) is a hyper parameter in Densely Connected structures, which denotes growth rate of the input feature map scale when network goes deeper. Previous methods constructing neural modules with structural sparsity (e.g. BID39 BID34) are mostly empirically constructing the sparse connection between modules. To give more convincing guidance of forming sparse connections, we design a genetic training strategy to search an optimized connection matrix. This algorithm treats the connection matrix as the gene, and only reserves mutated individual with the best performance among others. Actually, this strategy consistently changes the input feature groups during training process, and by always counting new feature distribution in, this strategy could take similar effect as drop-out methods, thus make the model robust. Moreover, besides merely creating parallel connections between modules, our algorithm could create long-distance connections between input module and output module by a transit layer. 
The experiment demonstrate that evolving locally dense but externally sparse connections could maintain competitive performance on benchmark image datasets while using compared slim network structures. By comparison experiments, we reveal contribution proportion on the final performance of each specific connection, and by that give the principle of design sparse connections between dense modules. The main contribution of this paper is as follows:• We enhance the hierarchical structure by utilizing the property of locally dense but externally sparse connections.• Instead of empirically constructing module connections, we design an evolving training algorithm to search optimized connection matrix.• We let each module choose output flow globally rather than simply creating parallel streams between modules so that the feature could flow to final layer through various depth.• We give a detailed analysis of how different sparse connections and different module properties will contribute to the final performance. Moreover, We reveal contribution proportion on the final performance of each connection and each module by several contrast experiments, and by that give principle of design sparse connections between dense modules. Network architectures are becomming denser. The exploration of network architectures has been an important foundation for all Deep Learning tasks. At the early period of deep learning, increasing the depth of a network might promise a good as the network structure varied from LeNet to VGG (e.g. BID20 BID18 BID29). Since people realize that the increasing depth of the network amplifies problem such as over-fitting and gradient-vanishing BID3, parallel structures (e.g. BID40) BID34 ) and densely connected layers BID16 have been introduced to increase network capacity. As DenseNet reaches the densest connection method inside each dense block, we refer to the dense block in this paper while constructing internal densely connected modules. Although our paper does not merely concentrate on highest benchmark accuracy, but also hierarchy structure and global sparsity, we still acquire competitive on benchmark datasets using slim network structure. Deep neural network compression. Besides increasing model capacity, deep neural network compression is another activate domain concentrating on acquire slim model by eliminating network redundancy. These methods could be roughly summarized as three basic aspects as fol-lows: 1. Numerical approximation of kernels, which includes binarization BID7 BID25, quantization BID41, weight sharing or coding method BID11 and mainly use numerical method to approximate kernel with smaller scale; 2. Sparse regularization on kernels, which mainly prune connections based on regularization on kernels, such as weights/channel pruning BID15 BID23 and structure sparsity learning BID22; 3. Decomposition of kernels, which mainly use smaller groups of low-rank kernel instead of a larger whole kernel, such as BID8 BID17 BID6 and BID39. These papers mostly put an emphasis on model sparsity rather than capacity. Our paper combines the global sparsity and locally dense feature together to maintain high capacity while making the network structure slim and separable. Evloving algorithm on nerual network. Many early works have developed methods that evolve both topologies and weights BID0 BID5 BID24 BID32. Most of them implement in the area of reinforcement learning. 
Evolutionary methods for searching better network structures have risen again recently on reinforcement domain BID36 BID28. Also, it still shows great potential for image classification BID26. Google has proposed a state-of-the-art deep neural network structure NasNet BID42, and reaches the best performance so far by searching the best architecture on large scale. However, the huge scale of these networks with the searching parameters method still remains a problem. Our paper emphasizes on structural density & sparsity. The evolving algorithm is only used to search sparse connections during the training process. Convolution operation could be understood as the projection between different channels of feature maps. Channel-wise mapping between input and output feature map could be expressed as connections. In convolution operation, the kernel could be written as: j * i * m * n, where j, i denotes output channels and input channels, M * N denotes the size of filter W. In order to separately represents the connection between each channel pair (i, j) and illustrate concepts of'local', we use Frobenius norm representation in Eq. of each filter to represent the importance of channels as in FIG0: Separately represents connections between output channels and input channels. Brightened part means the F-norm between these specific channels is significantly large. Dark area shows that the model has significant channel wise redundancy. DISPLAYFORM0 Under this representation, we could calculate feature map of typical convolution kernel and show in FIG0. As a convolution kernel could be considered as mapping input feature from i channels to j channels, dark parts suggest that norm of a size m * n filter is compared small, which is also called redundancy in network compression domain. Besides, inspired by the brain structure shown in neural science papers, making kernels locally dense connected could significantly save parameters. In this case, the kernel could be decomposed as it shows in FIG1. Obviously, this decomposition method sacrifice a large number of connections in each layer. In order to maintain high model capacity after decomposition, we create sparse connections between these modules as below. illustrates two small kernels with shape 3 * 2 * m * n after ideal decomposition. Especially, grey color denotes the connections between channels has been cut off. Note that under this example, decomposition saved 18 * m * n parameters. To create locally density, different from the traditional method that eliminates redundancy by pruning channels on a pretrained kernel, we would like the modularity forming along training process perusing. In that case, there exist two major ways to form local density, the first is placing L2 regularization in loss function to regulate weights distributed along diagonal in the adjacent matrix, the second and what we have chosen is to prefix some densely connected modules and explore the sparse connections between them. In order to acquire locally density both in depth and width wise, we stack several'narrow' convolution kernels into a dense module as shown in Fig. 3. This structure also uses a bottleneck layer with growth rate k BID16 which consists of sequential layers {BN -1*1conv -BN -3*3conv} (with zero padding 1). 
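A PyTorch sketch of one such dense module is given below; the explicit ReLUs, the 4k width of the 1x1 bottleneck, and the layer count are our assumptions following common DenseNet practice, while the {BN - 1*1conv - BN - 3*3conv} ordering, the growth rate k, and the channel-halving transition follow the text.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """One densely connected layer: BN -> 1x1 conv -> BN -> 3x3 conv (pad 1),
    producing `growth_rate` new channels concatenated to its input."""
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        inter = 4 * growth_rate                     # 1x1 bottleneck width (our choice)
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, inter, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter), nn.ReLU(inplace=True),
            nn.Conv2d(inter, growth_rate, kernel_size=3, padding=1, bias=False),
        )
    def forward(self, x):
        return torch.cat([x, self.net(x)], dim=1)

class DenseModule(nn.Module):
    """A prefix dense module followed by a transition layer that halves the
    channel count and downsamples, as in Fig. 3 (layer count is a placeholder)."""
    def __init__(self, in_ch, growth_rate=32, n_layers=6):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(n_layers):
            layers.append(Bottleneck(ch, growth_rate))
            ch += growth_rate
        self.block = nn.Sequential(*layers)
        self.transition = nn.Sequential(
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch // 2, kernel_size=1, bias=False),
            nn.AvgPool2d(2),
        )
    def forward(self, x):
        return self.transition(self.block(x))
```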
The connection strategy between bottleneck layers is also densely connected, and the connectivity could be presented as DISPLAYFORM0, where H l represents nonlinear operation on feature map in layer l and (x 0, x 1, x 2, ...x n) represents the concatenation of all previous layers. It should be noticed that inside each dense module, the feature map size is constant, but channels will grow rapidly as layer depth and growth rate increase. To control the model scale, we use a transit layer introduced in DenseNet BID16 to reduce channels of output feature map to half of original number. In this paper, we take a densely connected module as a basic element. Figure 3: A structure example for a prefix dense module as shown above, where yellow layer represents several densely connect bottleneck layers (it means all output has a direct connection to the output layer). The detailed structure used in a bottleneck layer shown left. After the final layer, the green layer represents a transition layer to control the feature map size. Dense blocks depth in our experiment usually varied from 6-20 layers. While dealing with sparse connections between modules, a significant problem for us is that feature map size will decrease after it flows through a dense module. In order to create sparse connections, here, we firstly figure out connection methods within the same distance, secondly we figure out methods of dealing long distance connections; finally, we could represent sparse connections in the matrix. To figure out the influence of the connection method, we implement a contrast experiment as it shows in Experiment of connection method part 4.1. Experiment demonstrate that although concatenation method caused a larger model, its accuracy on CIFAR10 does not show an absolute advantage over the addition method. Moreover, since we attempt to apply an evolution method to find optimized external connections, the concatenating need more operations when changing the input features. In that case, we select the addition method for sparse connections. Long Distance Connections: As we mentioned above, the feature map size will change since it flows through dense modules. In order to make it possible for making long-distance connections between different depth, we use a transfer layer with {1*1conv -average pooling} structure to fit the feature map into the dense module requirement. Notice that {1*1conv} layer reform the feature map channels while average pooling changes the feature map size to fit requirement. It should be noticed that, in this way, for each module, they could have various network depth. Represent sparse connection: For better analysis of sparse connections, we use the adjacent matrix to represent connections as FIG2. If there exists a connection, we set element value correspond to that index in connection matrix to be 1, otherwise 0. Here we could simply define the density as DISPLAYFORM0 sum(Cmax), where C i denotes the current connection matrix, C max denotes the connection matrix under the fully connected condition, sum means the summation value of all elements in the matrix. In this paper, we only used directed graphs and down sampling connections, so the lower left of the matrix should always be zero. Fig. 
(b), red rectangle area denotes connections with distance 1, green rectangle denotes connections with distance 2, blue area denotes connections with distance 3 3.4 EVOLUTION ALGORITHM TO SEARCH IMPORTANT CONNECTIONS One crucial problem in creating sparse topology connections is that there has not been a convincing theory on what could be called an efficient connection. In that case, we decide to make the neural network searching optimized sparse connection by itself. In this paper we use a genetic algorithm BID30 ) to search the proper connections. We encoding connection matrix as the gene for genetic algorithm. In each iteration, the genetic algorithm generate several new'individuals' with genes from mutation of the best'individual' in last iteration. The set of generated'individuals' is called'population'. Genetic algorithm evolves by select best performance individual Encoding: Inspired by the genetic algorithm, evolving methods need to have a good encoding to describe object features. Here we take the adjacent matrix to represent connection topology during training. In implementation details, we use a connection list to attach each module to avoid wasting storage. Initial state: As we do not use pre-trained modules, we randomly initialize the weight value of modules at the first iteration of the training process. Since a deep neural network needs a long time to train, restricted to our computation capacity, we set the population between 2 to 3 individuals. For the connection matrix of the initial state, we set it only have parallel direct connections as shown in Initial state denotes the initial connections P. As we set before first iteration P best = P init, based on P best we generate 2 individual below. All together these 3 individual form the population to be trained simultaneously in iteration 1. Then, we choose the individual with the best performance, and based on that we form population for iteration 2. Follow this principle we maintain network evolving. Evolution Strategy: We define the connection matrix of the initial individual state as P init; best performance individual of the previous iteration as P best, and others as P i at beginning of each iteration, the evolution of connections could be defined as: DISPLAYFORM1 where we choose P best as input of mutation function G, then generate several mutation individuals P 1, P 2... based on P best. Then we treat the set of P best, P 1, P 2... as population in this iteration. It means the best performance individual will remain to next iteration, and based on it we mutate new individuals. What exactly mutation function G does is that based on the input connection matrix, randomly pick two possible connections and change the connectivity of it. It means that, if we randomly pick an unconnected connection, we set it connected, and for already connected connection, we set it disconnected. Different from methods used in the NEAT algorithm BID32 ) which forces connections denser over time, our strategy has a larger probability to become denser if density is less than 0.5, and it has a larger probability to become sparser if density is large than 0.5.After the population of each iteration has been generated, we need to separately train each individual for a complete epoch and make it a fair comparison between each individual. In implementing detailwise, before start training, we set a checkpoint for all status and parameters and make sure that all individuals under comparison start from checkpoints. 
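A minimal numpy sketch of one evolution step on the connection matrix is given below; the number of flips per mutation, the 0.7/0.3 density bias, and the module count are placeholders, and the per-individual training/evaluation that drives selection is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def density(C):
    # fraction of realised connections among all allowed forward connections
    return C.sum() / np.triu(np.ones_like(C), k=1).sum()

def mutate(C_best, n_flips=2):
    # Mutation operator G: flip the connectivity of `n_flips` module pairs,
    # biased so the matrix tends to get denser when density < 0.5 and sparser
    # when density > 0.5 (the 0.7 / 0.3 bias values are placeholders).
    C = C_best.copy()
    pairs = np.transpose(np.triu_indices_from(C, k=1))  # forward connections only
    for _ in range(n_flips):
        add = rng.random() < (0.7 if density(C) < 0.5 else 0.3)
        candidates = [p for p in pairs if C[p[0], p[1]] == (0.0 if add else 1.0)]
        if candidates:
            i, j = candidates[rng.integers(len(candidates))]
            C[i, j] = 1.0 - C[i, j]
    return C

# One evolution iteration: keep the best individual and spawn mutants from it.
# (Training each individual for one epoch -- the expensive part of Algorithm 1 --
#  is omitted here.)
n_modules = 4                        # placeholder; the paper uses 12 modules
C_best = np.eye(n_modules, k=1)      # initial state: parallel direct connections
population = [C_best] + [mutate(C_best) for _ in range(2)]
```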
After the training process, only the individual with the best performance will remain, and based on that, we can generate the population of the next iteration. The whole process shows in Algorithm 1 and Fig P init ← Initial Connection Matrix 3: DISPLAYFORM2 for n iterations do 5: DISPLAYFORM3 checkpoint ← Model at P best 7:for k iterations do 8: DISPLAYFORM4 train P k 10:if P k.accuracy > P best.accuracy then 11: DISPLAYFORM5 end if end for end for Return P best 16: end procedure We firstly do a contrast experiment on Concatenation vs. Addition method to figure out which connection method we will use. As the test object is the connection method, we prefix a group of sparse connections and control all other training strategy and environment exactly the same, then separately train the network on the CIFAR10 dataset. We run our experiments on NVIDIA K80 GPU device. The test is shown as FIG5. Fig. (b) denotes an example of a random chosen P 1 and Fig (a) denotes the train&test curve correspond to it. Fig. (c) shows the comparison on three random chosen situation. We could observe that the addition method only have a negligible difference with the concatenation method. Although the curve of addition method seems to have more fluctuations, it only has a negligible difference (we use the difference in highest accuracy on the test set to represent difference) with the concatenation method. As we mentioned before, the addition method is faster and more convenient for changeable feature map size. We choose addition method in later experiments. It should be also noticed that, the accuracy step jumps in the figures are caused by learning rate change for all experiments in this section. As we use the same learning rate change strategy mentioned in section 4.2 for all experiments, all step jumps in our experiments happen at the same position. For prefixed dense modules, we set it with 4 different depth, where each depth has 3 modules. The total of 12 modules has the growth rate of 32, the modules in depth 1,2,3,4 respectively have 6,12,24,16 layers. Then we run several sparse connection evolving algorithms also training on CI-FAR10 dataset on NVIDIA AWS P3.x2large instance. We set the total iteration number to be 160, with weight decay of 5e-4. We use SDG with momentum 0.9 for gradient decsent. The learning rate strategy is the same as most of the papers that during epoch 0-90 the learning rate is 0.1; during 90-140 learning rate is 0.01; and during 140-160 learning rate is 0.001. It should be noticed that changing the learning rate will lead to accuracy'step jumps' such as FIG5 shows. It's a common phenomenon. Restricted to our computation power, we set the number of individuals generated in each iteration to be 2. The training curve of P best shown as According to the repeatable experiments, we could see that although randomness of forming the first generation of populations may lead to variation and fluctuation in the early period of the testing performance curve, the training curve will finally converge to the same trend. This shows the feasibility of our algorithm. Based on these experiments we found that the optimized connection matrix is not unique to achieve good performance. However, we could still find some similarity between those connection matrices in the experiment (Fig. 8) which could reach high accuracy. 
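The overall search loop (Algorithm 1 above) can be sketched as follows. This is a schematic reconstruction under our own naming: build_and_train_one_epoch stands for assembling the sparsely connected network from a connection matrix and training it for one epoch from the shared checkpoint, evaluate returns validation accuracy, and mutate is any one-argument mutation operator (for example the one sketched earlier with its mask bound in).

import copy

def evolve_connections(p_init, module_weights, n_iterations, population_size,
                       build_and_train_one_epoch, evaluate, mutate):
    # Genetic search over connection matrices: keep the best individual of each
    # iteration, mutate it to form the next population, and train every individual
    # from the same checkpoint so the comparison is fair.
    p_best, best_acc = p_init, float("-inf")
    for _ in range(n_iterations):
        population = [p_best] + [mutate(p_best) for _ in range(population_size - 1)]
        checkpoint = copy.deepcopy(module_weights)
        for p_k in population:
            weights_k = build_and_train_one_epoch(p_k, copy.deepcopy(checkpoint))
            acc_k = evaluate(p_k, weights_k)
            if acc_k > best_acc:
                p_best, best_acc, module_weights = p_k, acc_k, weights_k
    return p_best, module_weights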
It denotes that the modules with shallow depth are more likely to form a long-distance connection, which means the distance between the input feature map and output are shorten under that situation. This perfectly fits a current trend observed by various other papers BID16 BID27 BID13 BID6 BID33 BID6 that skip/direct connections are important.above. The shown in FIG8. Clearly, the networks with smaller growth rate have higher test accuracy and more flatten curve shape compared to those with larger growth rates at the earlier period of training. It means that the modules with smaller scale are easier to train while evolving sparse connections. We can also see that although modules with smaller growth rates converge really fast and could get a good after 90 epoch, the final test accuracy is not as high as those modules with larger growth rate. This phenomenon, in fact, proves an empirical that neural network redundancy is a necessary part of achieving high performance on test accuracy. However, experiment also demonstrate that the network redundancy is not the'larger the better'. As it shows in FIG8, after the growth rate is larger than 32, the test accuracy will not increase anymore. It is also rational because if the capacity of each module is too large, an unstable input feature may make the network harder to train. In another side, the increasing growth rate, which leads to the increasing of model scale, increases the risk of over-fitting. Although our paper emphasizes on how sparse connections will change the model performance, we still give performance scores on the benchmark dataset as shown in Tab1, Tab2. Since the aim of this paper is to obtain slim structures while keeping the model's capacity and achieve separable network structures, the test accuracy on both ImageNet and CIFAR is not that high compared to the state-of-the-art model. However, we still get a competitive on both datasets. After the evolving training algorithm gives optimal sparse connections, we wonder which sets of connections play a more important role in the whole network flow. We separately cut off one sparse connection each time and test the remaining accuracy on CIFAR10 dataset. Then we come up with a matrix that suggests how much accuracy decreasing from losing each connection as shown in FIG0 In experiment , the red rectangle area denotes the direct connections; the green and blue rectangle area denote the long-distance connections. According to the accuracy loss distribution, local and direct connections are of vital importance for a neural network. It is rational because the deep learning method needs a compared invariant forward and backward feature flow path for propagation. We could also see the accuracy loss is larger along the diagonal to the high left of the matrix. It means that connections with shallow depth perform a more important role in conduct features/patterns than deeper connections. It is also rational because the shallower connections simultaneously mean the features that flow through such connections have not been extract to some level of abstraction. In FIG0, each column denotes how many connections are attached to this module. Contrast experiment suggests that: 1. The connections between shallow modules are more important than deeper and long-distance connections. 2. The local connections contribute a base test accuracy, and the long-distance connections will contribute more on increase accuracy by small steps based on the baseline accuracy. 3. 
The more connections a module has as input, the more robust the module will be when cutting off some of the connections. DISPLAYFORM0 In this paper, we firstly create locally dense and externally sparse structures by prefixing some dense modules and add sparse connections between them. Experiment demonstrate that evolving sparse connections could reach competitive on benchmark datasets. In order to give properties of these biologically plausible structures, we apply several sets of contrast experiments as shown in Experiment. By equally changing the input feature groups of each module during the whole training process, this strategy could alleviate the risk of the weights being trapped in local optimal point. Same to most of the related works, redundancy of each dense module is not'the larger the better', where the test accuracy will first increase within the growth rate increases, but finally drop while the growth is above some threshold. The combination of being dense and being sparse is an interesting area, and the internal dense and externally sparse structure also coincide with the modularity in human brain. We prove the feasibility of these structures and give a simple algorithm to search best connections. We also noticed that the connection matrix is not unique for reaching good performance. We will concentrate on revealing the relationship between these similar connection matrices and the representing features behind it. In this case, we may acquire state of the art performance on other datasets and tasks in our future work. Moreover, as these structures have various direct paths between input and output, separating a network into several small networks without any accuracy loss is also a promising topic. | In this paper, we explore an internal dense yet external sparse network structure of deep neural networks and analyze its key properties. | 1,204 | scitldr |
Statistical inference methods are fundamentally important in machine learning. Most state-of-the-art inference algorithms are variants of Markov chain Monte Carlo (MCMC) or variational inference (VI). However, both methods struggle with limitations in practice: MCMC methods can be computationally demanding; VI methods may have large bias. In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation. The proposed method can generate low-biased samples by increasing the length of MCMC simulation and optimising the MCMC hyper-parameters, which offers attractive balance between approximation bias and computational efficiency. We show that our method produces promising on popular benchmarks when compared to recent hybrid methods of MCMC and VI. Statistical inference methods in machine learning are dominated by two approaches: simulation and optimisation. Markov chain Monte Carlo (MCMC) is a well-known simulation-based method, which promises asymptotically unbiased samples from arbitrary distributions at the cost of expensive Markov simulations. Variational inference (VI) is a well-known method using optimisation, which fits a parametric approximation to the target distribution. VI is biased but offers a computationally efficient generation of approximate samples. There is a recent trend of hybrid methods of MCMC and VI to achieve a better balance between computational efficiency and bias. Hybrid methods often use MCMC or VI as an algorithmic component of the other. In particular, proposed a promising modified VI method that reduces approximation bias by using MCMC transition kernels. Another technique reduces the computational complexity of MCMC by initialising the Markov simulation from a pretrained variational approximation . proposed to improve MCMC using flexible non-linear transformations given by neural networks and gradientbased auto-tuning strategies. In this work, we propose a novel hybrid method, called ergodic inference (EI). EI improves over both MCMC and VI by tuning the hyper-parameters of a flexible finite-step MCMC chain so that its last state sampling distribution converges fast to a target distribution. EI optimises a tractable objective function which only requires to evaluate the logarithm of the unnormalized target density. Furthermore, unlike in traditional MCMC methods, the samples generated by EI from the last state of the MCMC chain are independent and have no correlations. EI offers an appealing option to balance computational complexity vs. bias on popular benchmarks in machine learning. Compared with previous hybrid methods, EI has following advantages: • EI's hyperparameter tuning produces sampling distributions with lower approximation bias. • The bias is guaranteed to decrease as the length of the MCMC chain increases. • By stopping gradient computations, EI has less computational cost than related baselines. We also state some disadvantages of our method: • The initial state distribution in EI's MCMC chain has to have higher entropy than the target. • The computational complexity per simulated sample of EI is in general higher than in VI. Monte Carlo (MC) statistical inference approximates expectations under a given distribution using simulated samples. Given a target distribution π, MC estimations of an expectation E π [f (x)] are defined as empirical average of the evaluation of f on samples from π. 
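As a concrete illustration of the estimator just described, the snippet below (ours, not from the paper) approximates E_π[f(x)] by the empirical average of f over simulated samples.

import numpy as np

def mc_estimate(f, samples):
    # Empirical-average Monte Carlo estimate of E_pi[f(x)] from samples of pi.
    return np.mean([f(x) for x in samples])

rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000)          # samples from pi = N(0, 1)
print(mc_estimate(lambda x: x ** 2, samples))   # estimates E[x^2] = 1.0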
To generate samples from π, we assume that the unnormalized density function π * (x) can be easily computed. In a Bayesian setting we typically work with π * (x|y) given by the product of the prior p(x) and the likelihood p(y|x), where y denotes observed variables and x denotes the model parameters specifying p(y|x). Markov chain Monte Carlo (MCMC) casts inference as simulation of ergodic Markov chains that converge to the target π. The MCMC kernel M (x |x) is characterised by the detailed balance (DB) property: π(x)M (x |x) = π(x)M (x|x). Given an unnormalised target density π *, an MCMC kernel can be constructed in three steps: first, sample an auxiliary random variable r from an auxiliary distribution q φ1 with parameters φ 1; second, create a new candidate sample as (x, r) = f φ2 (x t−1, r), where f φ2 is a deterministic function with parameters φ 2; finally, accept the proposal as x t = x with probability p MH = min {0, π * (x)q φ1 (r)/[π * (x t−1)q φ1 (r)]}, otherwise duplicate the previous sample as x t = x t−1. The last step is well known in the literature as the Metropolis-Hastings (M-H) correction step and it in MCMC kernels that satisfy the DB condition. In the following, we denote the joint MCMC parameters (φ 1, φ 2) by φ. If f φ2 does not preserve volume, then it requires a Jacobian correction factor in the ratio in p M H. Hamiltonian Monte Carlo (HMC) is a successful MCMC method which has drawn great attention. A few recent works based on this method are;;. In HMC, the auxiliary distribution q φ1 is often chosen to be Gaussian with zero-mean and a constant diagonal covariance matrix specified by φ 1. The most common f φ2 in HMC is a numeric integrator called the leapfrog algorithm, which simulates Hamiltonian dynamics defined by log π * . The leapfrog integrator requires the gradient of log π * and a step size parameter given by φ 2. Given any initial state x 0, MCMC can generate asymptotically unbiased samples x 1:n. For this, MCMC iteratively simulates the next sample x t through the application of the MCMC transition kernel to the previous sample x t−1. It is well known in the literature that MCMC is computationally demanding . In particular, it is often necessary to run sufficiently long burn-in MCMC simulations to reduce simulation bias. Another drawback of MCMC is sample correlation, which increases the variance of the MC estimator . To avoid strong sample correlation, the common practice in MCMC to tune hyper-parameters manually using sample quality metrics like effective sample size, , which has been developed into automated gradient-based tuning strategies in recent work . Variational inference (VI) is a popular alternative to MCMC for generating approximate samples from π. Unlike MCMC reducing sample bias by long burn-in simulation, VI casts the sample bias reduction as an optimisation problem, where a parametric approximate sampling distribution P is fit to the target π. In particular, VI optimises the evidence lower bound (ELBO) given by where, also known as the entropy, must be tractable to compute. L ELBO (P T π *) is a lower bound on the log normalising constant log Z = log π * (x) dx. This bound is tight when P = π. therefore, the approximation bias in VI can be defined as the gap between L ELBO (P π *) and log Z, that is, where D KL (P π) denotes the Kullback-Leibler (KL) divergence. Variational approximations often belong to simple parametric families like the multivariate Gaussian distribution with diagonal covariance matrix. 
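The three-step kernel construction described above (sample an auxiliary variable, apply a deterministic proposal, then apply the Metropolis-Hastings correction) can be sketched as a single HMC transition in NumPy. This is our own illustrative implementation: log_pi and grad_log_pi denote the log unnormalised target and its gradient, the auxiliary distribution q is a standard Gaussian, and the leapfrog proposal is volume preserving so no Jacobian correction is needed.

import numpy as np

def leapfrog(x, r, grad_log_pi, step_size, n_steps):
    # Volume-preserving leapfrog integrator for the Hamiltonian defined by log pi*.
    x, r = x.copy(), r.copy()
    r = r + 0.5 * step_size * grad_log_pi(x)
    for step in range(n_steps):
        x = x + step_size * r
        if step < n_steps - 1:
            r = r + step_size * grad_log_pi(x)
    r = r + 0.5 * step_size * grad_log_pi(x)
    return x, r

def hmc_transition(x, log_pi, grad_log_pi, step_size=0.05, n_steps=5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    r0 = rng.standard_normal(x.shape)                                  # step 1: auxiliary r
    x_new, r_new = leapfrog(x, r0, grad_log_pi, step_size, n_steps)    # step 2: proposal
    # step 3: Metropolis-Hastings correction with q(r) = N(0, I)
    log_accept = (log_pi(x_new) - 0.5 * np.sum(r_new ** 2)) \
               - (log_pi(x) - 0.5 * np.sum(r0 ** 2))
    return x_new if np.log(rng.random()) < log_accept else x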
This in computationally efficient algorithms for bias reduction and sample generation, but may also produce highly biased samples in cases of over-simplified approximation that ignores correlation. Designing variational approximation to achieve low bias under the constraint of tractable entropy and efficient sampling procedures is possible using flexible distributions parameterised by neural networks (NNs) . However, how to design such NNs for VI is still a research challenge. The balance between computational efficiency and bias is a challenge at the heart of all inference methods. MCMC represents a family of simulation-based methods that guarantee low-bias samples at cost of expensive simulations; VI represents a family of optimisation-based methods that generate high-bias samples at a low computational cost. Many recent works seek a better balance between efficiency and bias by combining MCMC and VI. proposed to reduce variational bias by optimising an ELBO specified in terms of the tractable joint density of short MCMC chains. The idea seems initially promising, but the proposed ELBO becomes looser and looser as the chain grows longer. construct an alternative ELBO for HMC that still has problems since the auxiliary momentum variables are sampled only once at the beginning of the chain, which reduces the empirical performance of HMC. Inspired by contrastive divergence, proposed a novel variational objective function to optimise variational parameters by adding additional term that minimise the KL between a MCMC distribution and variational approximation to reduce variational bias. and proposed to replace expensive burn-in simulations in MCMC with samples from pre-trained variational approximations. This approach is effective at finding good initial proposal distributions. However, it does not offer a solution for tuning HMC parameters , which are critical for good empirical performance. Another line of research has focused on improving inference using flexible distributions, which are transformed from simple parametric distributions by non-linear non-volume preserving (NVP) functions. proposed to tune NVP parameterised MCMC w.r.t. a variant of the expected squared jumped distance (ESJD) loss proposed by. proposed a similar auto-tuning for NVP parameterised MCMC using an adversarial loss. Ergodic inference (EI) is motivated by the well-known convergence of MCMC chains : MCMC chains converge in terms of the total variation (TV) distance between the marginal distribution of the MCMC chain and the target π. Inspired by the convergence property of MCMC chains, we define an ergodic approximation P T to π with T MCMC steps as following. Given a parametric distribution P 0 with tractable density p 0 (x 0 ; φ 0) parameterlized by φ 0 and an MCMC kernel M (x |x; φ) constructed using the unnormalised target density π * and with MCMC hyperparameter φ, an ergodic approximation of π is the marginal distribution of the final state of an T -step MCMC chain initialized from P 0: We call φ 0 and φ the ergodic parameters of P T. Well known in MCMC literature like , the ergodic approximation p T converges to π after every MCMC transition and with sufficiently long chain p T is guaranteed to be arbitrarily close to π with arbitrary φ and φ 0. It is important to clarify that ergodic approximation is different from the modified variational methods like which only optimise the variational parameters φ 0, but the optimisation objective functions involve MCMC similation. 
In the following section, we show how EI can tune the ergodic parameters to minimise the bias of P T as an approximation to the target π with finite T. To reduce the KL divergence D KL (P T π), one could tune the burn-in parameter φ 0 and the MCMC parameter φ by minimizing equation 2. However, this is infeasible because we cannot analytically evaluate p T in equation 3. Instead, we exploit the convergence of ergodic Markov chains and propose to optimise an alternative objective as the following constrained optimisation problem: max where h is a hyperparameter that should be close to the entropy of the target, that is, h ≈ H(π). We call the objective in equation 4 the ergodic modified lower bound (EMLBO), denoted by L(φ 0, φ, π *). Note that the EMLBO is similar to L ELBO (P T π *), with the intractable entropy H(P T) replaced by the tractable L ELBO (P 0 π *). We now give some motivation for this constrained objective. First, we explain the inclusion of the term L ELBO (P 0 π *) in equation 4 and its connection to H(P T). If we maximised only the first term E p [log π * (x)] with respect to a fully flexible distribution P, the would be a point probability mass at the mode of the target π. This degenerate solution is avoided in VI by optimising the sum of E p [log π * (x)] and the entropy term H(P), which enforces P to move away from a point probability mass. However, H(P T) is intractable in ergodic approximation. Fotunately, we notice that maximising the term L ELBO (P 0 π *) = E p0 [log π * (x)] + H(P 0) has similar effect of maximising H(P T) for preventing P 0 from collapsing to the mode of π. It is easy to show that P T cannot be a delta unless H(P T) = −∞, which also implies L ELBO (P T π *) does not exist. Since the KL divergence D KL (P t π) never increases after each MCMC transition step , The constraint in equation 5 is necessary to eliminate the following pathology. If P 0 does not satisfy, we will favor P T to stay close to P 0 instead of making it converge to π faster. This is illustrated by the plot in the right part of Figure 2. To avoid this pathological case, note that It is interesting to compare the EMLBO with the objective function optimised by , that is, the ELBO given by where p(x 0:T −1 |x T) denotes the conditional density of the first T states of the MCMC chain given the last one x T and r(x 0:T −1 |x T) is an auxiliary variational distribution that approximates p(x 0:T −1 |x T). Note that the negative KL term in equation 6 will increase as T increases. This makes the ELBO in equation 6 become looser and looser as the chain length increases. In this case, the optimisation of equation 6 in an MCMC sampler that fits well the biased inverse model r(x 0:T −1 |x T) but whose marginal distribution for x T does not approximate π well. This limits the effectiveness of this method in chains with multiple MCMC transitions. By contrast, the EMLBO does not have this problem and its optimisation will produce a more and more accurate P T as T increases. EI combines the benefits of MCMC and VI and avoids their drawbacks, as shown in Table 1. In particular, the bias in EI is reduced by using longer chains, as in MCMC, and EI generates independent samples, as in VI. Futhermore, EI optimises an objective that directly quantifies the bias of the generated samples, as in VI. Methods for tuning MCMC do not satisfy the latter and optimise instead indirect proxies for mixing speed, e.g. expected squared jumped distance . 
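To make the optimisation target concrete, the EMLBO in equation 4 can be estimated by simple Monte Carlo: draw a batch from P_0, evaluate the ELBO term using the closed-form entropy of P_0, push the same batch through the T MCMC transitions, and average log π* at the final state. The sketch below is ours; sample_p0, entropy_p0 and mcmc_step are assumed to be supplied by the ergodic approximation.

import numpy as np

def emlbo_estimate(log_pi_star, sample_p0, entropy_p0, mcmc_step, T, n_samples=256):
    # L(phi_0, phi, pi*) = E_{p_T}[log pi*(x)] + E_{p_0}[log pi*(x)] + H(P_0)
    x = sample_p0(n_samples)                     # batch of initial states x_0 ~ P_0
    elbo_p0 = np.mean(log_pi_star(x)) + entropy_p0
    for t in range(T):
        x = mcmc_step(x, t)                      # t-th transition, with its own parameters
    return np.mean(log_pi_star(x)) + elbo_p0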
Importantly, EI can use gradients to tune different MCMC parameters at each step of the chain, as suggested by. This gives EI an extra flexibility which existing MCMC methods do not have. Finally, EI is different from parallel-chain MCMC: while EI generates independent samples, parallel-chain MCMC draws correlated samples from several chains running in parallel. We now show how to maximise the ergodic objective using gradient-based optimisation. The gradient ∂ φ0,φ L(φ 0, φ, π *) is equal to the sum of two gradient terms. The first one ∂ φ0 L ELBO (P 0 π *) is affected by the constraint H(P 0) > h, while the second term is not. If we ignore the constraint, the first gradient term can be estimated by Monte Carlo using the reparameterization trick proposed in (D.P. ;): where f φ0 (·) is a deterministic function that maps the random variable i sampled from a simple distribution, e.g. a factorized standard Gaussian, into the random variable x i 0 sampled from p 0 (·; φ 0). To guarantee that our gradient-based optimiser yields a solution satisfying the constraint, we first initialize φ 0 so that H(P 0) > h and, afterwards, we force the gradient descent optimiser to leave φ 0 unchanged if H(P 0) is to get lower than h during the optimisation process. The Monte Carlo estimation of ∂ φ E p T [log π * (x)] can also be computed using the reparameterization trick. For this, the Metropolis-Hastings (M-H) correction step in the MCMC transitions, as described in Section 2.2, can be reformulated as applying the following transformation to x t−1: where is an indicator function that takes value one if p MH > u and zero otherwise. In Hamiltonian Monte Carlo (HMC), f φ is the leapfrog integrator of Hamiltonian dynamics with the leapfrog step size φ. We define the T -times composition of g φ, given in equation 8, as the transformation x T = g T φ (x 0, r 1:T ; u 1:T). Then, the second gradient term can be estimated by Monte Carlo as follows: T t=1 q(r t)Unif(u t ; 0, 1). Note that the gradient term equation 9 is correct under the assumption f φ is volume-preserving in the joint space of (x t−1, r t), otherwise additional gradient term of the Jacobian of f φ w.r.t. φ is required. However, it is not a concern for many popular MCMC kernels. For example, the leapfrog integrator in HMC f φ guarantees the preservation of volume as shown in . It is worth to mention that the indicator function in equation 8 is not continuous but differentiable almost everywhere. Therefore, the gradient in equation 9 can be computed conveniently using standard autodifferentiation tools. The gradient in equation 9 requires computing ∂ φ g T φ (x 0, r 1:T ; u 1:T), which can be done easily by using auto-differentiation and gradient backpropagation through the transfromations g φ (·, r t ; u t) with t = T,..., 1. However, backpropagation in deep compositions can be computationally demanding. We discovered a trick to accelerate the gradient computation by stopping the backpropagation of the gradient at the input x t−1 of g φ (x t−1, r t ; u t), for t = 1,..., T. Empirically this trick has almost no impact on the convergence speed of the ergodic approximation, as shown in Figure 2. 3.3 THE ENTROPY CONSTRAINT AND HYPERPARAMETER TUNING As mentioned previously, ignoring the constraint H(P 0) > H(π) may lead to pathological when optimising the ergodic objective. To illustrate this, we consider fitting an ergodic approximation given by a Hamilton Monte Carlo (HMC) transition kernel with T = 9. 
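In code, the reparameterised transition g_φ of equation 8 and the stop-gradient trick can be sketched in PyTorch as below. This is our own reading of the construction: u is a Uniform(0,1) tensor, the accept indicator is computed exactly as in the M-H test (it is piecewise constant, hence differentiable almost everywhere), x.detach() at the input of each transition implements the trick of not backpropagating into earlier steps, and proposal stands for a differentiable, volume-preserving map such as the leapfrog integrator.

import torch

def mh_transform(x, r, u, log_pi_star, log_q, proposal, stop_gradient=True):
    # One transition x_t = g_phi(x_{t-1}, r_t; u_t), usable inside backpropagation.
    if stop_gradient:
        x = x.detach()                       # stop gradients flowing into previous steps
    x_new, r_new = proposal(x, r)
    log_ratio = (log_pi_star(x_new) + log_q(r_new)) - (log_pi_star(x) + log_q(r))
    accept = (log_ratio > torch.log(u)).to(x.dtype)     # indicator of p_MH > u
    return accept * x_new + (1.0 - accept) * x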
P 9 denotes the initial ergodic approximation before traing and P * 9 denotes the same approximation after training. The target distribution is a correlated bivariate Gaussian given by π = N (0, (2.0, 1.5; 1.5; 1.6)). Samples from this distribution are shown in plot (a) in Figure 1. We optimise different a separate HMC parameter φ t, as described in Section 2.2, for each HMC step t. We consider two initial distributions. The first one is P 0 = N (0, 3I) which satisfies the assumption H(P 0) > H(π). The second one is P 0 = N (0, I) with the entropy H(P 0) < H(π), which violates the assumption. In this latter case, we perform the unconstrained optimisation of equation 4. Plots (b) and (c) in Figure 1 show samples from P 9 and P * 9 for the valid P 0. In this first example, maximising the ergodic objective under equation 5 significantly accelerates the chain convergence as further shown by the left plot in Figure 2. Plots (d) and (e) in Figure 1 show samples from P 9 and P * 9 for the invalid initial distribution P 0. In the second example, E p0 [log π * (x)] is higher than E π [log π * (x)] and, consequently, maximising the unconstrained ergodic objective actually deteriorates the quality of the ing approximation. This is further illustrated by the right plot in Figure 2 which shows how the convergence of E pt [log π * (x)] to E π [log π * (x)] is significantly slowed down by the optimisation under the invalid P 0. Fortunately, it is straightforward to prevent this type of failure cases by appropriately tuning the scalar hyperparameter h in equation 5. A value of h that is too low may in higher bias of P T after optimisation as illustrated by the convergence of E pt [log π * (x)] in the blue and orange curves in Plot (b) in Figure 2. Furthermore, in many cases, estimating an upper bound on H(π) is feasible. For example, in Bayesian inference, the entropy of the prior distribution p(x) is often higher than the entropy of the posterior p(x|y). Therefore, the prior entropy can be used as a reference for tuning h. Figure 1: Histograms of samples from ergodic inference using HMC transition kernels. P 9 denotes the ergodic approximation before traing; P * 9 denotes the ergodic approximation after training. ] as a function of the length of the chain T using 10000 samples: Left: with the valid P 0 as H(P 0) > H(π); Right: with invalid P 0 as H(P 0) < H(π). SG training means the stop gradient is applied to the x from previous HMC step in equation 9. We first describe the general configuration of the ergodic inference method used in our experiments. Our ergodic approximation is constructed using HMC, one of the most successful MCMC methods in machine learning literature. We use T HMC transitions, each one involving 5 steps of the vanilla leapfrog integrator which was implemented following. The leapfrog pseudocode can be found in the appendix. In each HMC transition, the auxiliary variables are sampled from a zero-mean Gaussian distribution with diagonal covariance matrix. We tune the following HMC parameters: the variance of the auxiliary variables and the leapfrog step size, as mentioned in Section 2.2. We use and optimise a different value of the HMC parameters for each of the T HMC transitions considered. We call our ergodic inference method Hamiltonian ergodic inference (HEI). The burn-in model P 0 is factorized Gaussian. The initial entropy of P 0 is chosen to be the same as the entropy of the prior. 
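One simple way to respect the constraint H(P_0) > h during training, following the description above, is to keep the entropy of the factorised Gaussian P_0 in closed form and roll back any optimiser update to φ_0 that would push the entropy below h. The sketch below is ours and assumes a separate optimiser is used for the φ_0 parameters (here just the log standard deviations).

import math
import torch

def gaussian_entropy(log_std):
    # Closed-form entropy of a factorised Gaussian with the given log std vector.
    d = log_std.numel()
    return 0.5 * d * (1.0 + math.log(2.0 * math.pi)) + log_std.sum()

def constrained_phi0_step(phi0_optimizer, log_std, h):
    # Take the step, but leave phi_0 unchanged if the entropy constraint would break.
    backup = log_std.detach().clone()
    phi0_optimizer.step()
    if gaussian_entropy(log_std).item() <= h:
        with torch.no_grad():
            log_std.copy_(backup)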
The stocastic optimisation algorithm is Adam with TensorFlow implemtation and the optimiser hyperparameter setting is (β 1 = 0.9, β 2 = 0.999, = 10 −8). The initial HMC leapfrog step sizes are sampled uniformly between 0.01 and 0.025. Additional experiment on Bayesian neural networks is included in Appendix 6.3. We first compare Hamiltonian ergodic inference (HEI) with previous related methods on 6 synthetic bivariate benchmark distributions. Histograms of ground truth samples from each target distribution using rejection sampling are shown in Figure 3. The baselines considered include: 1) Hamiltonian variational inference (HVI) ; 2) generalized Hamiltonian Monte Carlo (GHMC) using an NVP parameterized HMC kernel and gradient-based auto-tuning of MCMC parameters w.r.t. sample correlation loss ; 3) Hamiltonian annealed importance sampling (HAIS) . It is worth to mention that we do not consider other hybrid inference methods like in our experiment, because these methods only combines MCMC simulation with VI but not optimise the parameters of MCMC kernel using the gradient-based approach like EI. HVI is the most similar method to HEI among all three baselines, because both HEI and HVI methods generate samples from the last state of MCMC chains and use gradient-based MCMC hyperparameter tuning to reduce bias. For a fair comparison between HVI and HEI, we consider the HMC chains with exactly the same setting in both methods: the initial state follows a standard Gaussian distribution and the length of HMC chain is T = 10. The key difference between HVI and HEI is the hyperparameter tuning objective, as mentioned in Section 3.1. We trained HVI for 1000 iterations and verified the ELBO converges to a (local) minimum (plots of the training ELBO values are included in Appendix 6.2). We trained HEI for 50 iterations. Following the setting of HAIS by, we used 1,000 intermediate distributions with 5 leapfrog steps per HMC transition and manually tuned the HMC parameters to have acceptance rate around 70%. GHMC 1 was run using 100 parallel chains with 5 leapfrog steps per GHMC transition, 100 burn-in steps and 1000 auto-tuned training iterations. The verification of the convergence of E p T [log π * (x)] to E π [log π * (x)] for HEI is shown in plot (a) of Figure 5. We generate 100,000 samples with each method and evaluate sample quality using two metrics: 1) the histogram of simulated samples for visual inspection; 2) the MC estimation of E π [log π * (x)]. Effective sample size (ESS) is a popular sample correlation based evaluation metric in recent MCMC literature . However, we do not consider ESS in this experiment, because GHMC is the only method among all methods generating correlated samples. Therefore, the ESS of GHMC is guaranteed to be lower than HVI and HEI. To generate ground truth samples from benchmark distributions, we use. The ing sample histograms of the ground truth using rejection sampling are shown in figures 3 and considered approximated sampling methods are shown in 4. Table 2 shows the ing estimates of −E π [log π * (x)] together with the wall-clock simulation time for generating 100,000 samples. The left part of Table 3 shows the training time of the MCMC parameter optimisation for all methods except HAIS, which does not support gradient-based HMC hyperparameter tuning. HEI is faster than HVI and GHMC. Note, however, that the acceleration of HEI over HSVI is due to the stopping gradient trick described in Section 3.2. 
The histograms and the estimates of −E π [log π * (x)] generated by HEI are consistent with the of the more expensive unbiased samplers GHMC and HAIS, which are close to the ground truth. By contrast, HVI exhibits a clear bias in all benchmarks. Regarding the sampling time, HVI and HEI simulate HMC chains with the same length and, consequently, perform similarly in this case while sample simulation from HAIS and GHMC is much more expensive. 6.00 3000 -83.57 50 HVI(T =1, 16LF, n h =800, ConvNet encoder) 6.00 360 -83.68 48 HVAE(T =1, 16LF, n h =500, ConvNet encoder) 6.00 360 -84.22 48 HEI(T =30, 5LF, n h =500, no neural net encoder) 1.65 54 -83.17 48 HEI(T =30, 5LF, n h =500, no neural net encoder) 3.00 100 -82.76 46 HEI(T =30, 5LF, n h =500, no neural net encoder) 6.00 200 -82.65 45 HEI(T =30, 5LF, n h =500, no neural net encoder) 12.00 400 -81.43 38 HEI(T =15, 5LF, n h =500, no neural net encoder) 8.00 540 -83.30 48 Table 4: Comparisons in terms of compuational efficiency and test log-likelihood in the training of deep generative models on the MNIST dataset. We implemented the deconvolutional decoder network in to test HVI. , the test likelihood is estimated using importence-weighted samples from the encoder network. In our experiment, we use Hamiltonian annealled importance sampling and report the effective sample size (ESS). We now evaluate HEI in the task of training deep generative models. MNIST is a standard benchmark problem in this case with 60,000 grey level 28 × 28 images of handwritten digits. For fair comparison with previous works, we use the 10,000 prebinarised MNIST test images 2 used by. The architecture of the generative model considered follows the deconvolutional network from. In particular, the unnormalised target p θ (x, y) consists of 32 dimensional latent variables x with Gaussian prior p(x) = N (0, I) and a deconvolutional network p θ (y|x) from top to bottom including a single fully-connected layer with 500 RELU hidden units, then three deconvolutional layers with 5 × 5 filters, feature maps, RELU activations and a logistic output layer. We consider a baseline given by a standard VAE with a factorised Gaussian approximate 4 Table 3: Left. The training time of MCMC parameter optimisation in seconds for 100 iterations for all candidate methods to produce the in Figure 4. The training time of HEI is lower than HVI because of the stop gradient trick mentioned in Section 3.2. We do not report the training time for HAIS, because HAIS requires manual tuning of MCMC hyperparameters which is not directly comparable to the gradient-based autotuning used by the other methods. Right. The training time in seconds per epoch for the experiments with deep generative models (DGM). posterior generated by an encoder network q(x|y) which mirrors the architecture of the decoder . The code for is not publicly available. Nevertheless, we reimplemented their convolutional VAE and were able to reproduce the marginal likelihood reported by , as shown in Table 4. This verifies that our implementation of the generation network is correct. We implemented HVI in using an auxiliary reverse model in the ELBO parameterized by a single hidden layer network with 640 hidden units and RELU activations. We also implemented the Hamiltonian variational encoder (HVAE) method , which is similar to HVI but without the reverse model. Unlike in the original HVAE, our implementation does not use tempering but still produces similar to those from. 
For the HEI encoder, we use T = 30 HMC steps, each with 5 leapfrog steps. The initial approximation P 0 is kept fixed to be the prior p(x). We optimise the decoder and the HEI encoder jointly using Adam. Table 4 shows the marginal test log-likelihood for HEI and the other methods, as estimated with 1,000 HAIS samples . , we also include the effective sample size (ESS) of HAIS samples for the purpose of verifying the reliability of the reported test log-likelihoods. Overall, HEI outperforms HVI, HVAE and the standard VAE in test log-likelihood when the training time of all methods is fixed to be 6 hours. HEI still produces significant gains when the training time is extended to 12 hours and, with only 1.6 hours of training, HEI can already outperform the convolutional VAE of with 6 hours of training. To verify the convergence of HEI, we show in plot (b) of Figure 5 estimates of.., 10 on five randomly chosen test images, where the ground truth E π [log π * (x)] is estimated by HAIS, after HMC hyper-parameter tuning in HEI (blue) and without hyper-parameter tuning in HEI (green), i.e. just using the initial hyper-parameter values. Plot (c) in Figure 5 shows similar , but using the maximum mean discrepancy (MMD) score to quantify the similarity of samples from p T to samples from π, where the latter ground truth samples are generated by HAIS. These plots suggests that shortening the HEI chain to T = 10 HMC steps will have a negligible effect on final simulation accuracy. Finally, the right part of Table 3 shows the training time of HEI with and without the stopping gradient trick. These resuls show that the former method is up to 5 times faster.: a: the targets are 2D benchmarks with the ground truth of E π [log π * (x)]; b: the target π is the VAE posterior p(x|y) each curve represents one random chosen test MNIST image y with the ground truth of E π [log π * (x)] estimated by HAIS using 100 samples; c: MMD score between HEI samples and HAIS samples. We have proposed Ergodic Inference (EI), a novel hybrid inference method that bridges MCMC and VI. EI a) reduces the approximation bias by increasing the number of MCMC steps, b) generates independent samples and c) tunes MCMC hyperparameters by optimising an objective function that directly quantifies the bias of the ing samples. The effectiveness of EI was verified on synthetic examples and on popular benchmarks for deep generative models. We have shown that we can generate samples much closer to a gold standard sampling method than similar hybrid inference methods and at a low computational cost. However, one disadvantage of EI is that it requires the entropy of the first MCMC step to be larger than the entropy of the target distribution. Here is the code for the vanilla leapfrog algorithm we used in HVI, HEI and HAIS. Algorithm 1: Leapfrog Input: x: state, r: momenta, φ 1: r variance, φ 2: step size, m: number of steps Result: x: new state, r: new momentum x = x; r = r; for t ← 1 to m dō r =r − 0.5φ 2 ∂ x U (x); x =x + φ 2 /φ 1r; r =r − 0.5φ 2 ∂ x U (x); end x =x; r =r; return x and r; The plots in Figure 6 show training loss (negative ELBO) of HVI and the training expected log likelihood E p T [log π * (x)] with T = 10 HMC steps with Adam with hyperparameter setting described in Section 4. It is clear that HVI is well trained but the approximation is biased, because E p T [log π * (x)] does not converge to the true loss (the red line on the right plots). 
In comparison, in Figure 6 (Left) in our paper, E p T [log π * (x)] of HEI converges to the ground true by optimising our ergodic loss. In this additional experiment we approximate the posterior distribution of Bayesian neural networks with standard Gaussian priors. We consider four UCI datasets and compare HEI with the stochastic gradient Hamilton Monte Carlo (SGHMC) method from. The networks used in this experiment have 50 hidden layers and 1 real valued output unit, as stated in. The HEI chain contains 50 HMC transformation with 3 Leapfrog steps each. The initial proposal distribution P 0 is a factorised Gaussian distribution with mean values obtained by running standard mean-field VI using Adam for 200 iterations. We do not use in P 0 the variance values returned by VI because these are unlikely to in higher entropy than the exact posterior since VI tends to understimate uncertainty. Instead, we choose the marginal variances to be n −0.5 where n is the number of inputs to the neural network layer for the weight. To reduce computational cost, we use in this case stochastic gradients in the leapfrog integrator. For this, we split the training data into 19 mini-batches and only use one random sampled mini-batch for computing the gradient in each leapfrog iteration. We train our HEI for 10 epochs and the stationary distribution is chosen as approximate posterior on a random sampled mini-batch. The ing test log-likelihoods are shown in Table 5. Overall, HEI produce significantly better than SGHMC. We also show in the right plot of Figure 7 estimates of E pt [log p(x, y)] for t = 1,..., 50 after HMC hyper-parameter tuning and without hyper-parameter tuning. | In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation. | 1,205 | scitldr |
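For readability, the leapfrog pseudocode in the appendix above (Algorithm 1) can be transcribed into a runnable NumPy function; this transcription is ours. Here phi1 is the variance of the auxiliary momentum, which acts as the mass in the position update, phi2 is the step size, and grad_U is the gradient of the potential U(x) = -log π*(x).

import numpy as np

def leapfrog(x, r, grad_U, phi1, phi2, m):
    # Vanilla leapfrog integrator, following the appendix pseudocode.
    x = np.array(x, dtype=float)
    r = np.array(r, dtype=float)
    for _ in range(m):
        r = r - 0.5 * phi2 * grad_U(x)       # half step for the momentum
        x = x + (phi2 / phi1) * r            # full step for the state
        r = r - 0.5 * phi2 * grad_U(x)       # half step for the momentum
    return x, r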
We show that information about whether a neural network's output will be correct or incorrect is present in the outputs of the network's intermediate layers. To demonstrate this effect, we train a new "meta" network to predict from either the final output of the underlying "base" network or the output of one of the base network's intermediate layers whether the base network will be correct or incorrect for a particular input. We find that, over a wide range of tasks and base networks, the meta network can achieve accuracies ranging from 65% - 85% in making this determination. What do neural networks know and where do they know it? At what stage of a network's processing does a "decision" get made, and are there reliable markers of a correct or incorrect decision either in the output or during a network's operation at one of its intermediate layers? To begin this investigation, we ask where in a neural network's operation it becomes possible to determine whether the network might be correct or incorrect in its output for a particular input. We feed a second, "meta" network the outputs of either an intermediate or final layer of the first, "base", network and train the meta network to predict whether the base network will be correct for an individual input. We call the second network a meta or metacognitive network because humans and other animals are known to make so-called metacognitive judgments to assess their confidence in the correctness of their beliefs or actions BID4. We find that the meta network is able to predict whether a base network will be correct or incorrect on previously unseen inputs with up to 69% accuracy for base networks classifying ImageNet images and 85% accuracy for a base network classifying CIFAR 10 images. (FIG0, Meta network pipeline: the meta network receives as input the output of one of the base network's layers for a particular input and predicts whether the base network will be correct.) As these two examples suggest, the accuracy of the meta network is higher for simpler underlying tasks in our experiments. The usefulness of the layers' outputs for predicting the accuracy of the network is lowest at the earliest layers in the network and increases to be highest either at the last hidden layer or, in most cases, the final output. Meta networks trained on different layers' outputs have significant but not complete overlap in which examples they are able to correctly predict will go on to be accurately or inaccurately classified, suggesting that there is slightly different information at each level which can be used to make assessments of accuracy. Our approach has two main stages. First, we run example images or text passages through a pretrained base network and save the final and intermediate outputs of that base network. We use PyTorch and save intermediate layer outputs using its hook feature BID5. For each example for which we save the output of intermediate stages, we also record whether the base network's output was correct; this label is what the meta network is trained to predict. We therefore train the meta network using examples from the training set of the base network.
Given the relatively high accuracy of the base models in our experiments, it would easy for the meta classifier to "cheat" by predicting that the base network is always correct. To prevent this, we balance the classes at training time and choose our models based on the best and most balanced accuracy on the validation set. During training we define this combination of highest accuracy and balance to be the geometric mean of the meta network's accuracy on "base correct" (C) and "base incorrect" (I) classes minus the absolute value of their difference: DISPLAYFORM0 All numbers reported here are from a held out test set of inputs previously unseen by both the base and meta networks. To determine how general and widely occurring is the phenomenon we are investigating, we train and test meta networks on a variety of tasks and base networks. Most of our testing is done on base networks which are trained for image classification tasks; we use six networks available in the PyTorch library. To assess the accuracy of networks trained on ImageNet BID7, we use AlexNet, Resnet 18, VGG 16, DenseNet 161, and ResNet 152 networks. For these networks we save and use for training the network's final outputs, the output of the last hidden layer (referred to as "last" in the tables"), and in some cases the output of the penultimate hidden layer ("penultimate" in the tables).For CIFAR 100 BID3 we use and train a VGG 16 network; for CIFAR 10 we train and use a VGG 19 network. For these models we train meta networks on the final output and the output of the last hidden layer, the penultimate hidden layer, the last convolutional layer, a middle convolutional layer (the fifth in VGG16, the eighth in VGG19), and the first convolutional layer. Our base networks for CIFAR 100 and CIFAR 10 had an accuracy of 71.5% and 91.1% on their respective test sets. To test whether intermediate layers can be predictive of accuracy on a non-vision task, we use a Bi-Directional Attention Flow (BiDAF) model BID8 pretrained on the Stanford Question Answering Dataset (SQuAD) version 1.1 BID6. The SQuAD task gives a base network a context passage and a question, and requires the network to output where in the passage the answer to the question starts and ends. We run each example passage and question pair in the SQuAD 1.1 dataset through a pretrained model available in the AllenNLP library BID1 ). This base model has an exact match accuracy (where both the start and end locations of the answer predicted by the model exactly match the ground truth) of 68.03%. Further details of the BiDAF model can be found in FIG2 in the Appendix. 3.1. Images: ImageNet, CIFAR 100, and CIFAR 10Accuracy numbers for the meta networks trained on various models classifying ImageNet images are found in TAB0.For any particular layer, the meta networks display balanced accuracies when the base network as correct and incorrect, ranging from 63% to 70%.Results for meta networks for a VGG16 network trained on CIFAR 100 can be found in TAB1; for meta networks for a VGG19 model trained on CIFAR 10 are in Table 3.There is a clear pattern: when a meta network is trained on the outputs of the first and middle convolutional layers, its accuracy is at best only somewhat better than chance (for CIFAR 100) and at worst no better than chance on average and very unbalanced (for CIFAR 10). Trained on the final outputs, a meta network reaches 77% accuracy averaged between the two classes for the CIFAR 100 base network and 85% for the CIFAR 10 network. 
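The model-selection criterion defined above can be written out explicitly: under our reading it is the geometric mean of the two per-class accuracies minus their absolute difference. A small function makes this concrete.

import math

def balanced_selection_score(acc_correct, acc_incorrect):
    # sqrt(acc_C * acc_I) - |acc_C - acc_I|: rewards accuracy that is both high and
    # balanced across the "base correct" (C) and "base incorrect" (I) classes.
    return math.sqrt(acc_correct * acc_incorrect) - abs(acc_correct - acc_incorrect)

# A balanced meta network scores higher than a lopsided one with a larger raw mean:
print(balanced_selection_score(0.70, 0.68))   # ~0.67
print(balanced_selection_score(0.98, 0.45))   # ~0.13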
Between these two extremes is a gradual increase in accuracy as meta networks are trained on later stages of the base network. Accuracy numbers for the meta networks trained on intermediate and final outputs of a BiDAF model for the SQuAD 1.1 dataset are found in TAB2. The output layer is the concatenation of the output prediction of the start and end locations of the answer. The pattern seen in meta network accuracies for networks trained on vision tasks is not evident here: the highest accuracy is not reached when the meta network is trained on the final outputs. Instead, the best meta network accuracy is found when classifying the output of the Modeling layer composed of Long Short Term Memory (LSTM) units just before the final output layer of the BiDAF model. We have seen that meta networks trained on the last few layers of network achieve similar accuracies. A natural question to ask about the use of many layers for a meta network, then, is whether the meta networks are getting nearly all of the same examples right even when looking at different layers' outputs. We found that while there was considerable overlap, a significant percentage (approximately 20%, depending on which layers we compare) of examples were correctly classified by a meta network looking at one layer of a VGG16 network, but not by a different meta network looking at another layer. In other words, it was not the case that the meta networks' verdicts for each example were the same no matter which layer was considered, suggesting that there might be different information about the accuracy of the base network present at different layers. TAB3 shows the overlaps of a meta network's verdicts for the outputs of the VGG16 network trained on CIFAR 100.A similar was evident in the meta classification when trained on the BiDAF model for the SQuAD dataset. The overlap between correct meta network predictions of accuracy was 73.2% on examples for which the base network was correct and 75.7% for those examples which were originally incorrect coming out of the base network. It is clear that the meta networks are able to learn something about the intermediate and final outputs which are indicative of the networks' accuracy. Just what that is and whether it can be useful in improving or interpreting the networks is as yet unclear. It is difficult to estimate the accuracy of a neural network at runtime. On tasks that involve a choice between discrete options, the value of the highest output after it is put through a softmax is often considered to represent the network's confidence or estimate of the probability of the corresponding class's being correct. However, it is not clear that this interpretation is warranted. Recent work has shown that these outputs are not reliable BID2. It is interesting, then, to consider whether when a meta network is trained on the final outputs it learns to simply classify those outputs in which the predicted class has very high values as correct and those with relatively low values as incorrect. This would correspond to the general intuition that high values for predicted classes indicate meaningfully high confidence. Figure 2 graphically illustrates the outputs of a ResNet18 network trained on ImageNet, with sample outputs of the highest confidence class arrayed along the x axis (a similar chart for outputs of the BiDAF model is found in the Appendix). 
It shows that while there is certainly a correlation between a base network's accuracy and the value of the output corresponding to the highest predicted class, it is not a simple or completely reliable one. On average, the base network indeed tends to be more confident in its correct answers than its wrong answers, and the set of examples the meta network is correct on shows this pattern clearly while the examples the meta network gets wrong show less distinct base "confidence" numbers. However, it is apparent that the base network is often very "confident" of a wrong answer and not confident of a correct answer. From inspecting the plots it is clear that the meta network is not judging the net- FIG1. Examples of maximum values (arrayed along the x axis) output by a Resnet18 network on ImageNet after the softmax function. The meta network is correct in both cases in the top row and incorrect in the bottom row; the Resnet base classifier is correct on the left and incorrect on the right in both rows. The mean value in each category is given. This shows that the meta network does not learn to simply classify the output based on the value of the class prediction, which is often interpreted as the network's'confidence'.work's output simply by learning a threshold "confidence" level above which it predicts it will be correct and below which it predicts it will be incorrect. This is evident by the large number of incorrect high "confidence" outputs of the base network which the meta network accurately marks as incorrect, as well as the correct low "confidence" outputs which the meta networks finds correct. Further study will be required to better understand what features the meta network has learned to look for to measure accuracy. Neural networks designed for a classification-type task are generally trained to give an answer, not to also indicate whether they are likely to be right or wrong. While there has has certainly been work to address this, notably that involving Bayesian networks BID0, the present work and its future extensions may point in other fruitful directions for characterizing a network's likely accuracy at runtime. There may also be interesting connections to work studying neural networks from an information theoretic perspective BID9. We train meta networks to judge whether a base network is correct or incorrect on particular inputs by feeding the meta network outputs, final or intermediate, from the base network. The blue arrows show which outputs of the base Bi-Directional Attention Flow model the meta network examines when classifying the base network's output as accurate or inaccurate. Image adapted from BID8 | Information about whether a neural network's output will be correct or incorrect is somewhat present in the outputs of the network's intermediate layers. | 1,206 | scitldr |
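To make the comparison with a plain "confidence" rule above concrete, one can measure how well the best single threshold on the base network's maximum softmax output predicts correctness and contrast it with the meta network's accuracy. The helper below is our own construction, not an analysis performed in the paper.

import numpy as np

def best_threshold_accuracy(max_softmax, base_correct):
    # Accuracy of the best rule "predict correct iff max softmax >= t" on held-out data.
    thresholds = np.unique(max_softmax)
    accuracies = [np.mean((max_softmax >= t) == base_correct) for t in thresholds]
    best = int(np.argmax(accuracies))
    return thresholds[best], accuracies[best]

If a trained meta network beats this single-threshold baseline, it must be using information beyond the raw value of the predicted class.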
We develop a new algorithm for imitation learning from a single expert demonstration. In contrast to many previous one-shot imitation learning approaches, our algorithm does not assume access to more than one expert demonstration during the training phase. Instead, we leverage an exploration policy to acquire unsupervised trajectories, which are then used to train both an encoder and a context-aware imitation policy. The optimization procedures for the encoder, imitation learner, and exploration policy are all tightly linked. This linking creates a feedback loop wherein the exploration policy collects new demonstrations that challenge the imitation learner, while the encoder attempts to help the imitation policy to the best of its abilities. We evaluate our algorithm on 6 MuJoCo robotics tasks. | Unsupervised self-imitation algorithm capable of inference from a single expert demonstration. | 1,207 | scitldr
Significant work has been dedicated to developing methods for communicating reasons for decision-making within automated scheduling systems to human users. However, much less focus has been placed on communicating reasons for why scheduling systems are unable to arrive at a feasible solution when over-constrained. We investigate this problem in the context of task scheduling. We introduce the agent resource-constrained project scheduling problem (ARCPSP), an extension of the resource-constrained project scheduling problem which includes a conception of agents that execute tasks in parallel. We outline a generic framework, based on efficiently enumerating minimal unsatisfiable sets (MUS) and maximal satisfiable sets (MSS), to produce small descriptions of the source of infeasibility. These descriptions are supplemented with potential relaxations that would fix the infeasibility found within the problem instance. We illustrate how this method may be applied to the ARCPSP and demonstrate how to generate different types of explanations for an over-constrained instance of the ARCPSP. In many real-world applications, human users in charge of developing plans and making decisions are aided by automated planning and scheduling systems. For example, NASA mission planning makes use of a large team of human planners that use various automated scheduling systems in order to construct day-to-day as well as long-term plans for crew members. A primary function of these automated systems is generating different types of plans and schedules while ensuring that various constraints do not conflict. When plans are ultimately constructed by human planners for a human crew, it is essential for both the planners and the crew executing the plans to understand how and why certain scheduling decisions were made by automated tools. In general, when the primary function of such constraint satisfaction and optimization tools is to support human decision-making, it is necessary for the automated systems to be transparent in how they arrive at certain outputs. Significant work has been dedicated to generating human-understandable explanations for why certain automated planning decisions were made (BID10). However, little work has been done in generating reasons for why plans or schedules cannot be generated under certain specifications. Human users interacting with such constraint satisfaction or optimization tools are bound to run into configurations for which no feasible solution exists. Fixing infeasible configurations is a challenging task for the human user if they are unable to understand why the solver arrives at an unsatisfiable result. While various partial constraint satisfaction tools exist for solving such over-constrained problems (BID4), solutions employing these tools have significant limitations that make them less applicable in certain real-life scenarios. Most of these methods employ constraint hierarchies to determine which constraints should be violated in order to satisfy more important ones. However, in complicated planning or scheduling applications involving multiple human agents, constructing such a hierarchy is often impractical. Instead, if reasons for infeasibility can be properly conveyed back to the human user, they can make high-level decisions to solve infeasibility in any way they see fit. In this paper, we provide a framework for iteratively generating human-understandable explanations of infeasibility for a specific class of scheduling problems.
These explanations manifest themselves as minimal sets of specifications (or constraints) that are responsible for causing infeasibility, coupled with suggestions for relaxations through which feasibility could be achieved. The method proposed in this paper allows users to enumerate over a series of explanations for infeasible instances of problems at varying levels of abstraction. For example, raw explanations of relevant low-level constraints may be directly output or a causal link may be established back to higher level descriptions of the problem to understand what specifications were responsible for the feasibility issue. This system also allows directed questions about feasibility to be asked, such as "why can task A not be scheduled after task B?"A strategy for iteratively generating minimal unsatisfiable sets (MUS) and maximal satisfiable sets (MSS) forms the basis for interpreting the infeasibility of the problem. Existing methods such as QuickXplain BID5 ) focus on generating a single most preferable explanation of infeasibility. Likewise, BID1 aims to generate a single explanation in the context of optimization without attempting to achieve minimality. However, overconstrained problems may contain several infeasibility issues which cannot be solved by changing only a single part of the problem. So, because a single MUS only provides indication of a single feasibility issue, we aim to enumerate several sets of MUS to highlight multiple feasibility issues found within the problem instance. Therefore, the proposed enumeration strategy is based on MARCO BID8 ), a flexible algorithm for generating MUSes and MSSes in succession. Motivated by the domain of space mission scheduling, we introduce and investigate the agent resource-constrained project scheduling problem (ARCPSP), an extension of the resource-constrained project scheduling problem (RCPSP) that incorporates the delegation of tasks to differing agents. This problem cannot be framed as an instance of the RCPSP because it deals with the case of asymmetric agents in which certain tasks may only be executed by a subset of the agents. This problem is meant to model applications in which efficient scheduling for teams of differing agents is critical. While we only explicitly investigate this problem, the generality of the approach outlined in this paper would allow the methodology to be adapted for different types of constraint satisfaction and optimization tools as well as different types of planning and scheduling problems. The main contributions of this paper are the following: firstly, we provide a formal definition of the agent resourceconstrained project scheduling problem (ARCPSP) in Section 3. Then in Section 4 we outline a difference logic encoding of the ARCPSP which is used to check feasibility of problem instances. The framework for generating humanunderstandable explanations of infeasibility for instances of the ARCPSP is described in Section 5. Finally, we provide an overview of the trade-off between interpretability and expressibility of different types of explanations and conclude by discussing how these ideas can be extended. In this section, we introduce relevant information and definitions used throughout the paper. These concepts will set the stage for formulating the ARCPSP in terms of satisfiability modulo theory and using minimal unsatisfiable sets and maximal satisfiable sets to generate explanations of infeasibility. 
Let X be a set of variables and let clauses C 1, . . ., C n be formulas representing constraints over X. Consider a formula of the form ϕ = C 1 ∧ C 2 ∧ · · · ∧ C n (1). We say the formula ϕ is satisfiable if there exists some assignment to the variables in X which makes ϕ evaluate to TRUE. Otherwise, it is unsatisfiable. Note that if ϕ takes the form of equation (1), as it does throughout this paper, every clause C i must be TRUE in order for ϕ to evaluate to TRUE. To implement the temporal constraints within a schedule, the clauses C i are taken from the theory of difference logic (DL), which makes deciding ϕ a satisfiability modulo theory (SMT) problem. To check satisfiability of problem instances, we use the Microsoft Z3 SMT solver (De Moura and Bjørner 2008). As will be discussed in Section 4, the agent resource-constrained project scheduling problem (ARCPSP) can be encoded in difference logic (DL), a fragment of linear real arithmetic (LRA). The numerical components of DL are solvable in polynomial time BID2 using graph-based procedures based on an incremental Bellman-Ford algorithm. In general, decidability for DL using these methods is more efficient than the simplex-based methods used to decide LRA. Under DL, atoms are restricted to the form x − y ≤ k, where x and y are variables and k is a constant. However, atoms such as x − y ≥ k and x − y = k can be rewritten in difference form, as y − x ≤ −k and (x − y ≤ k) ∧ (y − x ≤ −k), respectively. Bounds x ≤ k can also be incorporated by writing them as x − x 0 ≤ k, where x 0 is a special variable that is later set to zero. Definition 1. A minimal unsatisfiable set (MUS) of a set C of constraints is a subset M ⊆ C such that M is unsatisfiable and every proper subset M' ⊂ M is satisfiable. A maximal satisfiable set (MSS) of a set C of constraints is a subset M ⊆ C such that M is satisfiable and every proper superset M', with C ⊇ M' ⊃ M, is unsatisfiable. A minimal correction set (MCS) of a set C of constraints is the complement of some maximal satisfiable set of C, and can be understood as a minimal set of constraints which need to be removed from C in order to make it satisfiable. It is important to note that MUSes, MSSes, and MCSes are only locally maximal (or minimal), and are different from concepts of globally optimal subsets. MUSes can be understood as isolated, infeasible subsets of the constraints. Their primary characteristic is that removing any single constraint would make the set satisfiable. However, this does not necessarily guarantee the feasibility of the entire set of constraints because there might be many disjoint MUSes within the set. In order to make the entire set feasible (satisfiable), a hitting set of the MUSes must be removed. Every MCS is precisely one such hitting set. Definition 2. A background of a set C of constraints is a subset B ⊆ C of hard constraints, which must necessarily be satisfied. In the context of scheduling problems, backgrounds typically include constraints that ensure that the outcome of the schedule is logical, including conditions such as tasks not overlapping and resource constraints not being exceeded. We denote everything outside of the background B as the foreground. Hence, the background and foreground partition the set C of constraints. A minimal conflict of an over-constrained set C of constraints with respect to a background B is then a subset of the foreground M ⊆ C \ B such that M ∪ B is unsatisfiable and, for every proper subset M' ⊂ M, M' ∪ B is satisfiable.
A minimal relaxation of an over-constrained set C of constraints with respect to a background B is a subset of the foreground M ⊆ C \ B such that (C \ M) ∪ B is satisfiable and, for every proper subset M' ⊂ M, (C \ M') ∪ B is unsatisfiable. An explanation is then a sequence of minimal conflicts and minimal relaxations for a problem instance. The definitions of minimal conflicts and minimal relaxations mirror the concepts of MUSes and MCSes, respectively, while incorporating a background of constraints which cannot be modified. A background is necessary for specifying hard constraints which cannot be relaxed or modified. This way we can exclude certain constraints from consideration for conflicts or relaxations. A background also allows the generation of explanations concerning different aspects of a scheduling problem instance, a concept which will be explored later in the paper. The problem that we formulate is an extension of the resource-constrained project scheduling problem (RCPSP). Loosely, the RCPSP considers non-preemptive, precedence-constrained tasks of known durations that are constrained by reusable resource requirements (i.e., resources that are returned after a task stops using them). The agent resource-constrained project scheduling problem extends the RCPSP to include a set number of agents that execute the tasks in parallel, subject to certain compatibility constraints. Additionally, while the RCPSP generally cares about optimizing the total makespan of the schedule, we instead introduce a set start and end time for each scheduling instance and only focus on its feasibility (i.e., whether or not all tasks can be completed within this specified time frame). An instance of an agent resource-constrained project scheduling problem (ARCPSP) is defined by a tuple (M, J, s, p, U, E, R, B, b), where the components are defined as follows. - M = {M 1, . . ., M m} is a set of agents and J = {J 1, . . ., J n} is a set of tasks. - s specifies the allowable time ranges in which the tasks should be executed: each task J i is assigned a time window within which it must start and finish. - p = [p 1, . . ., p n] is a vector of the durations of tasks J, where p i is the duration of task J i. - U = {U 1, U 2, · · ·, U n} is the compatibility set for the tasks. Each task J i can be completed by a subset U i ⊆ M of agents. - E ⊆ J × J is a set of precedence relations. (J i, J j) ∈ E if and only if task J i must terminate before task J j begins. Precedence relations must be defined in a consistent way (by respecting the transitive property). - R = {R 1, R 2, · · ·, R q} is a set of reusable resources. - B ∈ N q represents the total availability of the resources R. The tasks that share resource R i are mutually exclusive if B i = 1. - b ∈ N n×q represents the resource demands of tasks, where task J i requires b i,j units of resource R j during its execution. The total demand of resource R j at any time cannot exceed its total availability B j. A schedule (S, A), with start times S = [S 1, . . ., S n] and agent assignments A = [A 1, . . ., A n], is a solution to an instance of an ARCPSP, where S i and A i are the start time and the assigned agent of task J i, respectively. A schedule is feasible if it satisfies the following constraints: • No agent has overlapping tasks: whenever A i = A j for i ≠ j, either S i + p i ≤ S j or S j + p j ≤ S i. • Every task falls within its allowable time frame: S i and S i + p i lie within the time window of J i. • The activities are assigned to compatible agents: A i ∈ U i. • The precedence relations are met: S i + p i ≤ S j for every (J i, J j) ∈ E. • The resource constraints are satisfied: let J t = {J i ∈ J | S i ≤ t < S i + p i} represent the set of tasks being executed at time t; then, for every time t and every resource R j, the sum of b i,j over the tasks J i ∈ J t does not exceed B j. The constraints of the ARCPSP can be formulated in terms of difference logic in the following way.
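Before walking through that encoding, the sketch below shows how such an instance and its start-time and agent variables could be set up directly with the Z3 Python API. The class name, field names, and the tiny two-task instance are illustrative assumptions, not part of the paper's implementation; resource constraints are deferred to the forbidden-set treatment described next.

from dataclasses import dataclass
from typing import List, Set, Tuple
from z3 import Int, Solver, Or

# Minimal container for an ARCPSP instance; field names are illustrative.
@dataclass
class ARCPSP:
    num_agents: int                    # |M|
    durations: List[int]               # p_i
    windows: List[Tuple[int, int]]     # allowable (earliest start, latest finish) per task
    compatible: List[Set[int]]         # U_i, the agents allowed to execute task i
    precedence: List[Tuple[int, int]]  # (i, j): task i must finish before task j starts

def encode(inst: ARCPSP):
    n = len(inst.durations)
    S = [Int(f"S_{i}") for i in range(n)]   # start times
    A = [Int(f"A_{i}") for i in range(n)]   # assigned agents
    solver = Solver()
    for i in range(n):
        lo, hi = inst.windows[i]
        solver.add(S[i] >= lo, S[i] + inst.durations[i] <= hi)               # time frame
        solver.add(Or([A[i] == a for a in sorted(inst.compatible[i])]))      # compatibility
    for i, j in inst.precedence:
        solver.add(S[i] + inst.durations[i] <= S[j])                         # precedence
    for i in range(n):
        for j in range(i + 1, n):
            # tasks assigned to the same agent must not overlap
            solver.add(Or(A[i] != A[j],
                          S[i] + inst.durations[i] <= S[j],
                          S[j] + inst.durations[j] <= S[i]))
    return solver, S, A

# Tiny two-task, two-agent example with hypothetical numbers.
inst = ARCPSP(num_agents=2, durations=[3, 2], windows=[(0, 10), (0, 10)],
              compatible=[{0, 1}, {1}], precedence=[(0, 1)])
solver, S, A = encode(inst)
print(solver.check())   # expected output: sat

The rewriting that follows makes each constraint precise and, for the resource constraints, replaces a naive time-indexed treatment with minimal forbidden sets.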
Constraint can be rewritten as DISPLAYFORM0 Constraint can be rewritten as DISPLAYFORM1 and constraint can be rewritten as DISPLAYFORM2 By representing the agents as integers M i ∈ N, constraint can be rewritten as Encoding the resource constraints is slightly more challenging. For mutually exclusive constraints (B i = 1), the tasks that share a resource can simply be encoded as not being allowed to be executed at the same time. That is, constraint FORMULA10 can be rewritten as DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 We can generalize this idea to non-mutually exclusive constraints through the concept of minimal forbidden sets. First introduced by BID9 ), forbidden sets are unsatisfiable sets with respect to resource constraints only. They represent the sets of tasks that cannot be simultaneously scheduled because they would otherwise exceed the availability of some resource constraint. The essential feature of a minimal forbidden set is that a single task can be rescheduled to another time to make the set respect the resource constraint. Therefore, given a minimal forbidden set J *, we would like to encode a constraint requiring that they cannot all be executing at the same time DISPLAYFORM6 where J t = {J i ∈ J | S i ≤ t < S i + p i} represents the set of tasks being executed at time t. This encoding is similar to methods which encode the RCPSP in terms of linear arithmetic BID0, but this requires discretizing time and incurs a cumbersome number of constraints if there are a large number of time-points. Moreover, equation FORMULA0 cannot easily be formulated in terms of difference logic. Instead, we can reformulate the constraint as there being at least two tasks in J * that do not overlap DISPLAYFORM7 for every minimal forbidden set J *. This constraint is logically equivalent to requiring that at any time-point, there be at least one task in each minimal forbidden set that is not being executed. Constraining all of the minimal forbidden sets, a subset of all of the forbidden sets, is sufficient to prevent resource conflicts because every forbidden set is a superset of some minimal forbidden set. Algorithms exist for computing all minimal forbidden sets (Stork and Uetz 2005) so we will not discuss such a computation here. Encoding resource constraints as forbidden sets is efficient in the context of the ARCPSP in comparison to other methods, such as equation FORMULA0. This is primarily because of the computational advantage achieved by difference logic over other theories such as linear real arithmetic and the encoding not requiring a discretization of time. Representing resource constraints as minimal forbidden sets also provides an explicit representation of resource constraints in terms of MUSes. If a resource constraint appears in an explanation, we can represent it as the minimal forbidden set which is being violated. For example, if a resource constraint constitutes a component of some MUS, it will be represented as some subset {A, B, C} of tasks, meaning that tasks A, B, and C cannot be scheduled at the same time because they would violate a resource constraint. Example 1. Scheduling astronauts aboard the ISS We model the problem of scheduling astronauts aboard the International Space Station (ISS) as an instance of the AR-CPSP for which the elements of M = {M 1, M 2, · · ·, M m} represent the crew members. We consider the case of m = 6 astronauts. 
The bounds on task execution are from minute 120 to 840; the sleeping related tasks outside of this bound are fixed so are not a part of the problem instance. The availability of the power resource, is 1000 units. The tasks are divided into different categories: -There are 6 laboratory tasks, each of duration 120 with allowable time ranges of, the entire work day. However, they have precedence constraints {(J Li, J Li+1) | 1 ≤ i < 6}, each laboratory task must be completed before the next one begins. Each laboratory task can be completed by any astronaut, so the compatibility set is all of the astronauts. Each laboratory task also has a power requirement of 400 units. -There are m weights and m treadmill tasks, one for each agent, each of duration 75. They have allowable time ranges of. Each weight and treadmill task must be completed by a unique astronaut so their compatibility sets can be specified by letting the ith task only be completed by astronaut M i. However, there is only one set of weights and treadmill equipment, so we can define reusable resources R W and R T both with availability B W, B T = 1, respectively. Each treadmill task also has a power requirement of 200 units. -There are m meal tasks. They have allowable time ranges of. Similar to the exercise tasks, each one must be completed by a unique astronaut so their compatibility sets can be specified by letting the ith meal task only be completed by astronaut M i. -Several miscellaneous tasks, deploy cubesat, collect sample, hardware gather, and eva suit test with durations 60, 60, 120, and 120 respectively, do not fall into any particular group. These tasks have an allowable time range of and can all be completed by any astronaut. They require 400, 500, 400, 400 units of power, respectively. A feasible schedule for this instance of the ARCPSP is visualized in FIG0. We modify the previous example slightly to produce an unsatisfiable problem instance. If we change the duration of each laboratory task from 120 to 121, we get an overconstrained system of constraints for which no feasible schedule exists. We'll use this running example to produce explanations in the next several sections. The proposed strategy for enumerating subsets is based on MARCO BID8 ) over other systems such as CAMUS BID7 because outputting at least some MUSes quickly is more important than explicitly generating every MUS. MARCO relies on four main functions to explore the feasibility of a power set lattice of constraints which we outline here in the language of conflicts and relaxations. -BlockUp -Called whenever a conflict is found. Marks every supersets of the current set, preventing it from being explored later on. -BlockDown -Called whenever a relaxation is found. Marks every subsets of the current set, preventing it from being explored later on. -Grow -If the current set is satisfiable, adds constraints to the current set until adding any other constraint would make it unsatisfiable. -Shrink -If the current set is unsatisfiable, removes constraints from the current set until removing any other constraint would make it satisfiable. A power set lattice of Boolean variables representing each constraint in the foreground is maintained throughout the execution of the algorithm. First, a random subset of constraints is constructed by choosing a point in the Boolean lattice. Then the SAT solver checks whether the set is SAT or UNSAT. 
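Before the worked example that follows, a minimal sketch of the Shrink and Grow subroutines just listed, using Z3 assumption literals to switch foreground constraints on and off per query; the helper names and the indicator-literal encoding are assumptions made for illustration.

from z3 import Solver, Implies, sat

def make_solver(background, constraints, indicators):
    # Background clauses are always asserted; each foreground constraint C_i is
    # guarded by an indicator literal b_i so it can be enabled per check.
    s = Solver()
    for c in background:
        s.add(c)
    for b, c in zip(indicators, constraints):
        s.add(Implies(b, c))
    return s

def is_sat(s, indicators, subset):
    return s.check(*[indicators[i] for i in subset]) == sat

def shrink(s, indicators, seed):
    # Reduce an UNSAT subset of foreground indices to a minimal conflict (MUS) by deletion.
    core = set(seed)
    for i in sorted(seed):
        if i in core and not is_sat(s, indicators, core - {i}):
            core.discard(i)          # still UNSAT without C_i, so C_i is not needed
    return core

def grow(s, indicators, seed, universe):
    # Extend a SAT subset to a maximal satisfiable set (MSS).
    mss = set(seed)
    for i in sorted(universe - mss):
        if is_sat(s, indicators, mss | {i}):
            mss.add(i)
    return mss

BlockUp and BlockDown are bookkeeping over the power-set lattice (typically a second Boolean solver) and are omitted here; the complement of a returned MSS with respect to the foreground is the corresponding minimal correction set.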
If the ing assignment is SAT (feasible), then constraints are iteratively added to the current set until a minimal relaxation is found. If the initial set is instead DISPLAYFORM0 The power set lattice of {C 1, C 3, C 4} with C 2 and corresponding relaxation (green) and conflict (red).UNSAT (infeasible), constraints are iteratively removed until a minimal conflict is found. After a minimal conflict is found, BlockUp is called, removing any supersets of the minimal conflict from consideration in the lattice. We can do this because any superset of a conflict must be unsatisfiable because it contains the conflict. The opposite direction also applies, any subset, after a minimal relaxation has been removed, must be satisfiable so we can rule them out of consideration. Hence, after a minimal relaxation is found, we BlockDown, removing any subsets from consideration in the lattice. Then a new satisfying assignment is pulled from the remaining sections of the boolean lattice and new conflicts and relaxations are generated until the entire lattice is blocked or the algorithm times out. Consider the unsatisfiable conjunction of the following set of clauses, DISPLAYFORM0 with B = {C 2}. We'll use this example to step through an execution of the subset enumeration algorithm. A visualization of the associated Boolean lattice is shown in Figure 2. A random initial seed has us start with clause {C 1, C 3} and the SAT solver says it's UNSAT. We then Shrink and remove C 3 from the set and the SAT solver says {C 1} is still UNSAT and minimal. We then output this minimal conflict {C 1}. Because this set is now minimal, we can BlockUp, removing supersets {C 1}, {C 1, C 3}, {C 1, C 4}, {C 1, C 3, C 4} from consideration. We then choose a new seed, let's say {C 3}. The ing set is SAT so we Grow to the set {C 3, C 4} which is then SAT and maximal so we can BlockDown subsets {C 4}, {C 3}, {∅} and output {C 1}, the complement of {C 3, C 4}, as a relaxation. The lattice is then entirely blocked off so we terminate with the single conflict and relaxation. Figure 2 shows the power set lattice of the foreground along with the corresponding relaxation (green) and conflict (red). The standard for the ARCPSP involves constraints to ensure that the ing schedule is logical. This way, the foreground only involves constraints which can be altered by parameters that are controlled by the user.-New variables S 0 and S m+1 are introduced that mark the beginning and end of the schedule bounds. These variables prevents the subset enumeration from relaxing the temporal constraints of tasks outside of the feasible bounds of the schedule DISPLAYFORM0 -Each task is assigned to some existing agent. Without this constraint, the solver could assign tasks to a nonexistent agent to solve conflicts DISPLAYFORM1 -No agent has overlapping activities. Disallowing the solver from consider cases in which tasks can overlap prevents it from generating meaningless . This condition is precisely constraint.We will refer to this set of constraints throughout the following section. The method of generating minimal conflicts and relaxations can be applied to both sets of individual constraints and, by modifying the , sets of tasks. In this section, we investigate the first case, which we call constraint explanations. Following the strategy in the beginning of Section 5, we enumerate only over the constraints that are in the foreground, as specified in Section 5.1. 
That is, we consider constraints timeframe, compatibility, precedence, and resource referring to equations,,, and, respectively, for each task in the schedule. The rest of the constraints are implied as a part of the because they only correspond to imposing a logical structure on the solution, not constraining the parameters of the schedule. Hence, the Boolean lattice which is enumerated over contains only these four types of constraints. The outputs for constraint explanations are formatted as a tuple of the relevant tasks followed by a constraint type. For example, (LAB 0, LAB 1) precedence refers to the precedence constraint between the first and second laboratory tasks. When a constraint is only relevant to a single task, we write the task followed by the constraint type (e.g., MEAL 0 compatible refers to the agent compatibility constraint for the first meal task).The full constraint explanation for Example 2 includes 14 minimal relaxations and 3 minimal conflicts. FIG2 shows a representative part of this full constraint explanation. The omitted conflicts and relaxations are identical in structure to the ones shown in FIG2 and give practically redundant information. Computing the set of minimal forbidden sets took 2.11 seconds and calculating the full explanation took 1.57 seconds. In this example, relaxations provide the user with minimal ways in which constraints could be changed to fix the schedule. Meanwhile, conflicts give insight into why infeasibility is occurring. For example, Relex 1 indicates that removing the precedence between laboratory task 3 and 4 would make the schedule feasible. However, Confl 1 indicates that the precedence between all of the laboratory tasks does not fit in the schedule. A user could use this latter information to alter the original parameters of the schedule rather than having to void an entire task or constraint. One possible solution could be extending the length of the schedule or shortening the length of some of the laboratory tasks, an option which is not revealed by relaxations alone. This formulation of explainability in terms of conflicts and relaxations also allows a user to ask pointed questions concerning the feasibility of an instance of an ARCPSP problem. Given a feasible instance of a problem, such as Example 1, specific questions may be asked about infeasible modifications of the problem. The modification in Example 2 is gotten by extending the lengths of the laboratory tasks. Hence, the explanation in FIG2 may be interpreted as an answer to the question: "why can the laboratory tasks not have a duration longer than 120 minutes?" In the following sections we explore two variations of the subset enumeration algorithm to generate higher-level descriptions of infeasibility. This is accomplished by pushing every constraint to the and populating the foreground with a fresh set of Boolean variables. Then, a set L of constraints can be constructed that encodes a logical relationship between the new symbolic variables in the foreground and the actual constraints in the . The set L of logical relations linking symbolic variables to the constraints also becomes part of the . Then, only the set of Boolean variables remains to be enumerated over in the foreground. In practice, this can be accomplished by replacing the Boolean lattice outlined in Section 5 by the symbolic lattice composed of the new variables. 
This enables the generation of explanations concerning these symbolic variables, which is dependent on the relationship L.Depending on what kinds of constraints populate the foreground, the size of the Boolean lattice which needs to be enumerated over can be greatly reduced. This reduces the number of calls that need to be made to the SMT solver before arriving upon conflicts and relaxation. Additionally, these type of explanations can reduce redundancies and produce more compact descriptions of infeasibility. The following section outlines how this concept can be applied to generate minimal conflicts and relaxations of sets of tasks. Task explanations can be generated by replacing the foreground (and hence, the power set lattice) with a set of variables representing individual tasks. We introduce a Boolean variable for each task and separate the constraints of the ARCPSP into two classifications: individual and relational. Constraints FORMULA7 and FORMULA8 as well as constraints FORMULA0 and FORMULA0 are individual constraints, involving only a single task. Constraints, FORMULA10, and are relational constraints, involving multiple tasks. We then encode the constraint that the truth of each task's Boolean variable J j implies the truth of every one of its individual constraints J j =⇒ I j where I j represents a conjunction of the task's timeframe, compatibility, and feasibility Hence, individual constraints need only be satisfied if their associated task's Boolean variable is true and relational constraints need only be satisfied if all of their associated tasks' Boolean variables are true. Through this logical relationship, we can now enumerate over these Boolean variables, each of which conceptually represents a task. As the Boolean variables are toggled on and off, the associated constraint lattice becomes constrained as if the schedule had been constructed with only that subset of tasks. Executing the enumeration algorithm over this modified foreground for the same overconstrained problem (Example 2) produces a similar set of conflicts and relaxations, part of which is shown in FIG3. The full explanation includes 3 minimal relaxations and 17 minimal conflicts in total. It took 4.03 seconds to compute the minimal forbidden sets and 0.55 seconds to compute the full explanation. Here, the task and constraint explanations are quite similar, but this is not always the case. The task explanations can often be much more compact than the constraint explanations because each variable represents many constraints. For similar reasons, the number of total conflicts and relaxations is often greatly reduced. Because of this, task explanations can give more straight forward explanations for the over-constrained problem, but they lack the granularity of the constraint explanations. For example, with the constraint explanations we were able to diagnose that the precedence between the lab tasks was creating an issue rather than a resource or other constraint. The task explanations leave out this detail, sacrificing expressibility for interpretability. In order to draw explanations back to a high-level interpretation of the problem, the foreground can be replaced by a set of human-written specifications. This further reduces the size of the power set lattice that is constructed out the foreground and reduces redundancy in the generated conflicts and relaxations. Tasks are often formed in groups which share certain scheduling specifications. 
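Returning to the task-level encoding above, the sketch below shows one way the indicator relationship J_j implies I_j, and its relational counterpart, could be asserted in Z3; the function signature and dictionary layout are assumptions.

from z3 import Bool, Implies, And, Solver

def task_lattice(background, individual, relational):
    # individual: {task_id: [constraints touching only that task]}
    # relational: [(set_of_task_ids, constraint), ...]
    s = Solver()
    for c in background:
        s.add(c)
    J = {j: Bool(f"J_{j}") for j in individual}
    for j, cs in individual.items():
        for c in cs:
            s.add(Implies(J[j], c))                      # J_j implies its individual constraints
    for tasks, c in relational:
        s.add(Implies(And([J[j] for j in tasks]), c))    # active only if all touched tasks are on
    return s, J

Enumerating conflicts and relaxations over the J_j variables with the same Shrink and Grow routines then yields task explanations; using one Boolean per human-written specification instead of one per task gives the specification explanations discussed next.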
For example, the meal tasks in Example 1 all share the same parameters except that they are assigned to unique astronauts. When Example 1 was described, such similar tasks were naturally formulated in different categories (e.g., meal, weights, etc.). Hence, specifications for tasks may be more succinctly expressed by making use of these similarities. An informal, human written list of constraints specifying Specifications Confl 1 {Lab tasks must happen in sequence, the scheduling horizon is 6am to 6pm} Relax 1 {Lab tasks must happen in sequence} Relax 2 {The scheduling horizon is 6am to 6pm} the parameters of Example 1 could be as follows:-The scheduling horizon is 6am to 6pm.-Meal tasks must be scheduled between 1pm and 2pm.-Lab tasks must happen in sequence.-Each meal/treadmill/weights task must be assigned to a different astronaut.-Weights/treadmill tasks cannot happen 60 minutes before pre-sleep.-No more than 1000 units of power may be drawn at once.-The treadmill tasks require 200 units of power.-Tasks EVA SUIT TEST, HW GATHER, C-SAT, and SAMP require 400, 400, 500, and 400 units of power.-There is only one set of weights and treadmill equipment. Then a relevant logical relationship may be drawn back to the actual set of constraints for each such specification. For example, the precedence relations between the lab tasks could be related through:"Lab tasks must happen in sequence."(LAB 0, LAB 1) precedence (LAB 1, LAB 2) precedence (LAB 2, LAB 3) precedence (LAB 3, LAB 4) precedence (LAB 4, LAB 5) precedence (LAB 5, LAB 6) precedence Similar constraints may be encoded for the rest of the specifications, which compose the set L linking the human written specifications to the actual constraints of the problem. This construction allows the generation of specification explanations. The specification explanation for Example 1 is displayed in FIG4. Notice the greatly reduced size of the specification explanation. Unlike the constraint and task explanation, the specification explanation does not suffer from producing many redundant conflicts and relaxations. A fundamental trade-off exists between the expressibility and interpretability of different kinds of explanations. Low-level explanations involving constraints provide detailed reasons for infeasibility but may be difficult for a human user to parse or understand. In contrast, because the high-level specification explanations correlate directly with the types of constraints which a human planner may think in, they potentially provide more direct and concise information to the user. However, they lack the fine tuned granularity of information that constraint and task explanations provide. For example, if only a single precedence constraint between the laboratory tasks was causing an issue, the specification explanation would obscure which of the constraints is responsible. We introduced the agent resource-constrained project scheduling problem (ARCPSP) along with an associated difference logic encoding. We proposed a general framework for generating minimal conflicts and minimal relaxations based on the MARCO algorithm and demonstrated how it could be used to generate varying types of descriptions for why infeasibility is occurring in instances of the ARCPSP. The framework outlined in this paper is general enough to be applied to constraint satisfaction formulations for various other scheduling and planning problems. 
These ideas may potentially be further extended to different kinds of formal languages, such as linear temporal logic, that are used to describe planning problems. In an interactive system, such as a scheduling software, when a user attempts to make an infeasible modification, it may be useful to generate a reason for the infeasibility in real time. Similarly, a user could query whether a modification to a feasible schedule would preserve feasibility and, if not, why not? Explanations similar to the ones constructed throughout this paper may likely be used to such an effect. Investigating methods for synthesizing natural language sentences out of the explanations is also subject to future research. Following the goal of QuickXplain BID5, given a partial ordering of constraint or task importance, preferred conflicts and relaxations may be explored earlier and full explanations may list conflicts and relaxations in preferential order. Such functionality would be especially useful in cases for which generating the full explanation is intractable. A preferential ordering of explanations may be achieved by adding and removing constraints during the Grow and Shrink steps based on the constraint preference ordering. Similarly, methods for enumerating disjoint (or otherwise distinct) conflicts may also be useful for producing a representative set of conflicts as concisely as possible. Currently, the most limiting bottleneck for scaling to larger problem instances comes from the number of minimal forbidden sets which can grow exponentially with the number of tasks. Certain lazy clause generation algorithms (Laborie 2003) may be used to represent resource constraints in a more efficient manner. Such representations may also be adapted to implement consumable resources in an explainability setting. | We develop a framework for generating human-understandable explanations for why infeasibility is occurring in over-constrained instances of a class of resource-constrained scheduling problems. | 1,208 | scitldr |
Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification. We demonstrate that gradient-based saliency approaches are unable to capture this shift, and develop a new defense which detects adversarial examples based on learnt saliency models instead. We study two approaches: a CNN trained to distinguish between natural and adversarial images using the saliency masks produced by our learnt saliency model, and a CNN trained on the salient pixels themselves as its input. On MNIST, CIFAR-10 and ASSIRA, our defenses are able to detect various adversarial attacks, including strong attacks such as C&W and DeepFool, contrary to gradient-based saliency and detectors which rely on the input image. The latter are unable to detect adversarial images when the L_2- and L_infinity-norms of the perturbations are too small. Lastly, we find that the salient-pixel-based detector improves on saliency-map-based detectors as it is more robust to white-box attacks. Adversarial examples highlight a crucial difference between human vision and computer image processing. Computers often fail to capture the characteristics of an image that are relevant for classification, or fail to generalize locally, i.e., they misclassify examples close to the training data. Attacks exploit this property by altering pixels the classifier heavily relies on, namely pixels which are irrelevant to humans for object recognition. As a consequence, adversarial perturbations fool classifiers while the correct class remains clear to humans. Saliency maps identify the pixels an image classifier uses for its prediction; as such, they can be used as a tool to understand why a classifier is fooled. Building on this concept, researchers have shown qualitatively that adversarial perturbations cause a shift in the saliency of classifiers. Figure 1 shows examples of a natural image and corresponding adversarial images, each above their respective saliency maps. The saliency maps corresponding to adversarial images show perceptible differences to that of the original image, even though the adversarial images themselves often seem unperturbed. For the original image, the saliency map shows that the classifier focuses on the four (and a couple of random pixels on the left). We observe that for the adversarial images, the classifier starts focusing more on irrelevant aspects of the left side of the image. There is ample research into different techniques for finding saliency maps. However, not all saliency maps are equally informative. For example, the Jacobian can be used to determine the saliency of a pixel in the classification of the image. As the Jacobian is often used to generate adversarial examples, intuitively, we expect that it can be used effectively to detect adversarial perturbations. A defense to this effect has been proposed: an input is deemed adversarial or natural based on the image concatenated with its Jacobian-based saliency map. Figure 1: The top row shows the input images and the bottom row shows the corresponding saliency maps. In the second row, lighter colours correspond to higher saliency (black corresponds to a saliency of 0, the lowest possible value). The classifier predicts (from left to right) the images as: 4, 9, 9, 8, 9, 9. Note the stark difference between the saliency masks of the original image and those of the adversarial examples.
However, as shown qualitatively by , gradients are not always able to capture differences between adversarial images and natural images (for an example see Figures 7 and 8 in Appendix D). 2 Here we inspect the proposed Jacobian-based approach and show that only the concatenated input affects the technique's performance in detecting adversarial examples, with the Jacobian having no effect. While gradients may not be informative for detection, saliency should be an effective tool for detecting adversarial images. In our analysis, we use more powerful model-based saliency techniques and show that the magnitude of the shift of the saliency map due to adversarial perturbations often exceeds the L 2 distance between the saliency maps of different natural images. Building on this , we consider two different possible effects adversarial perturbations might have on the classifier: 1. They might cause the classifier to focus on the wrong pixel locations 2. They might change the pixel values of salient pixels Based on these hypotheses, we employ two CNN classifier architectures to detect adversarial images. Claim can be captured by shifts in saliency maps, as previously considered by. In this work, we extend on their analysis 3 by proving the defensive capability of our model-based saliency against difficult black-box attacks, such as C&W and DeepFool 4, as well as white-box adversarial attacks. By considering claim, we demonstrate that incorporating pixel values improves the performance of the classifier when shifts in saliency maps do not suffice to capture adversarial perturbations. We also show that our salient-pixel based defense generalizes well (detecting stronger attacks when trained on weaker attacks) and is more robust than the saliency map defense against white-box attacks. Lastly, we demonstrate that saliency can be used to detect adversarial examples generated by small perturbations, contrary to other defenses, which exhibit threshold behavior: i.e., when the adversarial perturbation is too small, other defenses (specifically ;) are unable to detect the adversarial images. Saliency maps and adversarial perturbations have similar mathematical formulations and derivations. Both are computed by investigating the relation between the values of pixels and the classification score. Adversarial examples are found by deriving the minimal perturbations required to change the classification of an image. Saliency is computed by finding the pixels used by the model 2 show that gradient-based heat maps are less effective than other saliency methods in detecting adversarial perturbations generated using BIM . 3 Their main contribution is that saliency maps generated by different techniques are not equally effective in capturing changes due to adversarial perturbations (produced using BIM . 4 These attacks generate smaller L2 perturbations, making them more difficult to detect. The perturbation size used by can likely still be detected by a simple classifier that trains on images. to determine the class of an object . Saliency maps can be found by considering the smallest part of an image that is sufficient for a correct classification, known as the smallest sufficient region (SSR), or whose removal is sufficient for an incorrect classification, known as the smallest destroying region (SDR) . Observe that the latter definition of saliency is very close to that of adversarial examples. Mathematically, both saliency maps and adversarial perturbations can be derived in a similar fashion. 
Consider adversarial examples. The general formulation of an adversarial attack can be summarized as follows: where x is the natural image, r is the adversarial perturbation, y is the correct class, and y is an incorrect class. Due to the non-linearity of NNs, solving the above problem requires non-linear optimization. Therefore, in practice several different approaches to solving the above formulation have been implemented. For example, set r = εsign(δx). Similarly, saliency can be computed using the forward derivative δx (b;). Previous research has already started investigating the relation between saliency and adversarial examples. This includes: Using saliency to attack Researchers have devised adversarial attacks that use saliency (b;). The key idea is to use saliency to determine the pixel that is most sensitive to perturbation iteratively. The main benefit is that fewer pixels are perturbed -often perturbing as few as 4% of the pixels suffices to change the classification of the image (b). Using saliency to defend introduce a method that detects adversarial perturbations by using heat-map visualizations of the predicted class. However, in their analysis, they only use BIM , which is easily detected. hypothesize that there is a mismatch between the saliency of a classification model and the adversarial example. They propose a defense against adversarial attacks by training a classifier on images concatenated with their saliency map, which is computed by calculating the Jacobian of the classifier with respect to the image x, i.e., s x = ∇ x f (x). find that their method obtains a high accuracy (often near 100%) when detecting adversarial images generated by FGSM, MIM, and C&W attacks on MNIST, CIFAR-10, and 10-ImageNet. contradict these , and demonstrate that the gradients show imperceptible differences due to adversarial perturbations (see Figures 7 and 8 in Appendix D). Adversarial robustness and interpretability of models and 5 show that saliency maps can be used to explain adversary classifications. Both highlight an important trend: not all techniques used to compute saliency maps show shifts in saliency maps due to adversarial perturbations. shows that more robust models have more interpretable saliency masks. quantify the relation by investigating the alignment between the saliency map and the input image. In this section, we explain how we construct and evaluate our saliency-based adversarial example detectors. We train a convolutional neural network image classifier, which we target with black-box attacks; the architectures are summarized in Appendix A. We use cross-entropy loss and optimize the parameters using Adam ;;; b; b; , respectively). We use the implementation as provided in cleverhans (a). The hyper-parameters are summarized in Appendix B. To generate saliency masks, we adapt the method used by. Our reason is twofold: the technique computes high-quality saliency masks at a low computational cost. employ a U-Net with a novel loss function that targets SDR, SSR, mask sparsity, and mask smoothness. We adapt the original loss function to omit the total variational term, as mask smoothness is not required in our analysis. ) denote the generated map. First, the map average AV (f s) is used to ensure that the area of the map is small. Second, log(f c (Φ(x, f s))) is included to ensure that the salient pixels suffice to identify the correct class. 
Finally, f c (Φ(x, 1 − f s)) is included to ensure that the classifier can no longer recognize the class if the saliency map is removed. Therefore, our saliency loss function is: where f c is the softmax probability of the class c, Φ(x, f s) applies mask f s to image x, and λ i ≥ 0 are hyper-parameters. We adapt the PyTorch implementation provided by 6 and train the saliency model on standard, non-adversarial images only. For evaluation, we use the same saliency model for both natural and adversarial images. When generating the saliency maps for our images, we use the predicted classification for feature selection to prevent an information leak (which would occur if we use the true label). Our hypothesis is that if an image is adversarial, the classifier likely focuses on the wrong aspects or the pixels on which it focuses are misleading (due to the perturbed color or intensity) when classifying an image as adversarial. We consider two different cases by building classifiers for saliency maps and salient pixels. For both classifiers, we use the same architecture (and hyperparameters) as for the black-box image classifiers (as summarized in Appendix A). We build a detector based on the saliency maps of images as follows. First, we train a classifier and generate adversarial images for every natural image in the training dataset. Then we generate the saliency maps for the clean data {f s (X)} and adversarial images {f s (X adv)}. We build a binary detector for the saliency maps, which predicts whether the corresponding image is adversarial or natural. We abbreviate this defense as SMD (Saliency Map Defense). We do not concatenate the saliency maps to the input image. We construct a second classifier for the salient pixels. We follow the same steps as outlined in the previous section, aside from the final step. We define the salient pixels as f s (x) · x, where x is the image, f s (x) is the saliency map corresponding to x and · denotes the element-wise product. We abbreviate this defense as SPD (Salient Pixel Defense). Similarly to SMD, we do not concatenate the saliency maps to the input image. To benchmark our , we consider two baselines. First, we train a baseline classifier that classifies input as adversarial or natural based on the images alone. This allows us to evaluate the added benefit of using saliency maps. This method was implemented by. We abbreviate this defense as ID (Image Defense). Second, we compare our defense method with the saliency-based defense of (see Section 2). We abbreviate this defense as JSD, for Jacobian-based Saliency map Defense. In our implementation, we adapt the method of; we find that if we use f s (x) = ∇ x f (x) as the saliency map it leads to underflow, ing in a zero matrix. Therefore, instead we take the derivative with respect to the logits, i.e. f s (x) = ∇ x z(x). JSD is mathematically related to the other defenses. First, it is more general compared to ID: the filters of JSD can learn to ignore the Jacobian-based saliency, in which case the two methods are equivalent. Further, JSD is similar to SMD, as the filters can learn to ignore the image input. In this case, the only difference between JSD and SMD is that they use different techniques to derive saliency. However, JSD differs from SPD, as CNN filters cannot multiply one channel by another. We follow the evaluation protocol of and train each defense to detect adversarial images generated by a specific attack, thereby generating six different detection models (one for each black-box attack). 
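Before turning to the training data, a sketch in PyTorch of the masker objective and of the two detector inputs defined above. The weights lambda1 through lambda3, the assumption that the classifier returns logits, and the mask shape (B, 1, H, W) are illustrative choices rather than the exact released implementation.

import torch

def saliency_loss(classifier, masker, x, lambdas=(1.0, 1.0, 1.0)):
    # Mask-area, retain-class and destroy-class terms; the smoothness term is
    # omitted, as described in the text.
    l1, l2, l3 = lambdas
    with torch.no_grad():
        y_pred = classifier(x).argmax(dim=1)         # use the predicted class to avoid label leakage
    m = masker(x)                                     # mask in [0, 1], shape (B, 1, H, W)
    logp_keep = classifier(x * m).log_softmax(dim=1)
    p_drop = classifier(x * (1 - m)).softmax(dim=1)
    area = m.mean()                                            # AV(f_s)
    keep = -logp_keep.gather(1, y_pred[:, None]).mean()        # -log f_c(Phi(x, f_s))
    drop = p_drop.gather(1, y_pred[:, None]).mean()            # f_c(Phi(x, 1 - f_s))
    return l1 * area + l2 * keep + l3 * drop

def detector_inputs(masker, x):
    # SMD consumes the mask itself; SPD consumes the salient pixels f_s(x) * x.
    m = masker(x)
    return m, x * m

SMD is then trained on the first returned tensor and SPD on the second, each against the binary natural/adversarial labels described below.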
To generate the training data, we generate one adversarial example for every clean image. The training data becomes [X, X adv], where X denotes the clean data and X adv denotes the adversarial data, and the labels are [1 n, 0 n], 1 n and 0 n are one-and zero-vectors of length n, respectively. We use the same training procedure and models, as summarized in Appendix A, and report the accuracy of the classifiers on the test dataset. We compare the performance of the models on MNIST, CIFAR-10 and ASSIRA (see ; ; , respectively). In addition to the two frequently used benchmarks, we consider the ASSIRA cats and dogs dataset 7 as it contains highquality images but is less computationally expensive than ImageNet. 8 Further details on the datasets can be found in Appendix A. Many defenses hold up against black-box attacks but often are unable to defend against white-box attacks (a). For this reason, we generate white-box attacks tailored to the defense strategy. Our white-box attacks are iterative gradient-based attacks, which target both the classifier and the defense. Inspired by FGSM, we can target the classifier f as and the defense d as where Clip clips the pixels to the pre-defined maximum range. Using the above idea, we iterate between Equations 3 and 4 to generate the white-box attack for ID (the defense based on image classification). We propose similar white-box attacks for the other defenses, as shown in Appendix C. We limit the number of iterations T to 5, as we find it to be adequate to generate a sufficiently strong attack and further increasing T does not improve the performance. Our method is similar to that of. They propose finding adversarial examples as: where in our case α = 0.5. The key difference is that we iterate between Equations 3 and 4, rather than applying 3 and 4 simultaneously. We find that this is more effective at targeting the defense, which is more difficult to fool than the original classifier. We start by assessing the shift in saliency maps generated by adversarial perturbations and then present the efficacy of the detector against different adversarial attacks. Details, such as attack success rate, can be found in Appendix B. We start by quantifying the shift in saliency maps due to adversarial perturbations; we compute the L 2 distance between saliency maps of a natural image and its corresponding adversarial image. As a baseline, we compare these values with the L 2 distance between two different natural images. These statistics are summarized in Table 5. For CIFAR-10 and ASSIRA, the L 2 -norm between the saliency maps of a natural image and its corresponding adversarial image is comparable to or larger than the L 2 distance between two different natural images. Using a Mann-Whitney U-test, we prove quantitatively that the shift is significant for most adversarial attacks on CIFAR-10 and ASSIRA images. This suggests that our saliency-based method is an effective way of capturing adversarial perturbations. Table 1: L 2 distance between saliency maps of different images (row labelled Different Images) and the saliency maps of natural images and the adversarial image (generated by the type of attack specified in the row). The entries correspond to MNIST/CIFAR-10/ASSIRA. The p-value is derived using the Mann Whitney U-test, where we test whether the sample of L 2 distances between a natural and adversarial image is from the same distribution as different images. We use a nonparametric test to avoid assuming normality of the data. 
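A sketch of the shift statistic summarized in Table 1; the pairing of each natural image with a different natural image for the baseline distances is a bookkeeping assumption, not part of the reported protocol.

import numpy as np
from scipy.stats import mannwhitneyu

def saliency_shift(masks_nat, masks_adv, masks_other):
    # masks_*: arrays of shape (N, H, W); masks_other holds the saliency map of a
    # different natural image for each entry, giving the baseline distances.
    d_adv = np.linalg.norm((masks_nat - masks_adv).reshape(len(masks_nat), -1), axis=1)
    d_other = np.linalg.norm((masks_nat - masks_other).reshape(len(masks_nat), -1), axis=1)
    stat, p = mannwhitneyu(d_adv, d_other, alternative="two-sided")
    return d_adv.mean(), d_other.mean(), p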
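Returning to the white-box procedure described earlier in this section (alternating Equations 3 and 4), the following is a hedged reconstruction of that alternation for a single image, not a verbatim transcription of the per-defense Algorithms 1 to 3 given in the appendix; the step size, the number of iterations, the natural-label convention (natural = 1), and the pixel range [0, 1] are assumptions.

import torch
import torch.nn.functional as F

def whitebox_attack(x, y, classifier, detector, detector_input, eps=0.01, steps=5):
    # Alternate an FGSM-style step on the classifier with a step that pushes the
    # detector towards the 'natural' label; written for a single image (batch of 1).
    x_adv = x.clone()
    natural = torch.ones_like(y)                     # detector label convention: natural = 1
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        logits = classifier(x_adv)
        if logits.argmax(dim=1).item() == y.item():
            loss = F.cross_entropy(logits, y)        # not yet misclassified: ascend classifier loss
            g, = torch.autograd.grad(loss, x_adv)
            x_adv = (x_adv + eps * g.sign()).clamp(0, 1)
            continue
        det_logits = detector(detector_input(x_adv))
        if det_logits.argmax(dim=1).item() == 1:
            break                                    # classifier and detector both fooled
        loss = F.cross_entropy(det_logits, natural)  # still detected: descend detector loss
        g, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - eps * g.sign()).clamp(0, 1)
    return x_adv.detach()

Here detector_input selects what the detector sees: the image itself for ID, the mask for SMD, the salient pixels for SPD, or the image concatenated with its Jacobian-based map for JSD; for SMD and SPD the gradient flows back through the differentiable masker.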
Figure 2 summarizes the performance of the defense models trained on a single adversarial attack on different adversarial attacks; the values and standard deviations can found in Appendix G. The overall performance of the model-based saliency defense suggests that saliency can be used to determine whether an image is adversarial. Salient Pixel Defense outperforms Saliency Map Defense Overall, SPD (shown in blue) outperforms the other defenses, suggesting that the salient pixels provide useful information to the detector. Further, our defense generalizes well: even when trained on a weaker attack, SPD is able to detect stronger attacks. Both baseline methods, ID and JSD, only generalize well when trained on a stronger attack. When trained on a weaker attack, they are not able to detect stronger adversarial attacks. Worse generalization on JSMA We observe a drop in performance of the models when detecting JSMA, likely because JSMA is an L 0 -norm attack, which generates a different type of adversarial examples. This may suggest that defenses trained on a specific norm, only generalize well to other attacks generated by a norm that produces similar perturbations. FGSM, BIM, and MIM are L ∞ -norm attacks, and C &W and DF are L 2 -norm attacks. Both generate perturbations that are spread out over the entire image, contrary to L 0 norm attacks, which changes a few pixels using larger perturbations. Threshold Behavior Both ID and JSD exhibit threshold behavior: they are unable to detect adversarial examples if the perturbation size is below a given threshold. For example, see the performance of both defenses on the ASSIRA dataset. There is a strong correlation between detection accuracy and perturbation size, as measured by L ∞ and L 2 (see Table 2). ID is able to detect all adversarial images for which the perturbation size is either L 2 > 0.027 or L ∞ > 0.50, such as FGSM and JSMA. 10 However, the perturbations are much smaller for DF and CW, making these attacks harder to detect. The threshold appears to occur around L 2 = 0.025, as ID can sometimes detect the FGSM perturbations, generated with this size. This observation is in line with the of , who find that ID is highly efficient at detecting adversarial images with perturbations of ε ≥ 0.03 but unable to detect adversarial perturbations generated using ε = 0.01 (using FGSM for images scaled between 0 and 1), obtaining an accuracy of 50.0% in the latter case. 11 9 We observe that JSD performs similarly, although sometimes worse, compared to ID. Theoretically, the parameter space of ID is a subset of the parameters of JSD. The additional input (the Jacobian) makes the model more difficult to train. Therefore, the difference in can be attributed to training: the model is more difficult to train due to the increased number of parameters and does not learn to ignore the additional input. 10 FGSM is known to generate large perturbations. The perturbations for JSMA are relatively large as the attack minimizes the L0 norm, thereby perturbing as few pixels as possible, but by a large amount. 11 Our perturbations for FGSM are larger than 0.01 to ensure that FGSM is sufficiently strong (see Appendix B for a summary of the attack success rates). Table 3 summarizes the performance of different defenses against our white-box attack. Our whitebox methods are highly effective in fooling the classifier as well as the defenses for MNIST and ASSIRA, as shown by the before adversarial training . 
The white-box attack is unable to fool the detector for CIFAR-10 successfully. Next, we perform adversarial training: we iteratively train the detectors against the white-box attack and allow the white-box attack access to the new defense. The white-box attack no longer successfully defeats SPD, which becomes more robust against the attack, whereas SMD is not able to become robust against the white-box attack. In our analysis, we ascertain that the saliency maps of adversarial images differ from those of natural images. Further, we show that salient pixel based defenses perform better than a saliency map defense. When trained on a single black-box attack, our method is able to detect adversarial perturbations generated by different and stronger attacks. We show that gradients are unable to capture shifts in saliency due to adversarial perturbations and present an alternative adversarial defense using learnt saliency models that is effective against both black-box and white-box attacks. Building on the work of , we further establish the notion of threshold behavior, showing that the trend depends on the L 2 and L ∞ -norms of the perturbations and therefore also prevails when using other methods (JSD) and across different attacks. Future work could further investigate the performance of the defense in different applications. For example, as our method runs in real-time, it could be used to detect adversarial perturbations in video to counter recent attacks ). A ARCHITECTURES, HYPER-PARAMETERS AND DATA Figure 3: ASSIRA, CIFAR-10, and MNIST image classifier architecture and hyper-parameters. The first entry corresponds to the first layer, and the table proceeds chronologically until the last layer. Parameters f, k, p, s and n represent the number of filters, kernel size, pooling size, stride, number of filters, respectively. If stride is omitted, it is set to 1. All classifiers have a final softmax activation. We apply drop-out before every dense layer. Using a validation set, we experimented with different drop-out rates between 0.3 and 0.7 and found that the rate δ = 0.6 was optimal. We use a ReLu activation for the penultimates layers and a softmax activation for the final layer. We train the model for 10 epochs on batches of size 50. We compare the performance of the models on MNIST, CIFAR-10 and ASSIRA (see ; ; , respectively). For MNIST and CIFAR-10, we use the standard train and test splits, and for ASSIRA, we use 3, 000 images. We use 10% of the training data for the validation set, and re-train on the full training dataset once hyper-parameters were selected. Further experimentation of ID and JSD architecture We further experiment with the architectures of ID and JSD to determine whether the observed performance was the of the architecture. In particular, we considered the adjustments as summarized in Table A; however, we found that the changes did not improve performance. In this section, we present the black-box adversarial attack hyper-parameters (see Figure 4), the success rates of the different adversarial attacks (see Table 4) and an example of an adversarial image generated by the various black-box attacks (see Figure 5). et al., 2016a) are used. ε is the maximum perturbation allowed and ε i is the maximum perturbation allowed in an iteration. We use different hyperparameters for the MNIST and CIFAR-10 to ensure the attack is sufficiently strong. MNIST and CIFAR-10 Figure 5: Example of an Adversarial Image for the MNIST dataset. 
From top to bottom: the top row is the set of images; the bottom row shows the size of the noise added. Gray indicates no change, whereas white indicates that the image has been made lighter, and black indicates that the image has been made darker. As MNIST images are gray-scale low-resolution images, the adversarial perturbations are perceptible to the human eye. Nevertheless, the correct classification of the image is still clearly 4. However, the classifier predicts (from left to right) the images as: 4, 9, 9, 8, 9, 9. Further, we observe that the perturbations of FGSM, BIM, and MIM are more visible than those of C&W and DF. Algorithm 1 (white-box attack for JSD): starting from x adv ← x, for t = 0..T and j = 1..n, if x adv does not fool the classifier, update x adv ← x adv + ε sign(∇f (x adv, y)). Algorithm 1 provides the white-box attack for JSD. As mentioned in Section 2, JSD concatenates the image with its saliency map (computed as the Jacobian) and uses this as an input to the classifier. Algorithms 2 and 3 provide the white-box attacks for our defenses: SPD and SMD. The function f s corresponds to generating the saliency map using the method introduced by. Their method returns a two-dimensional saliency map. However, as the image has three dimensions, we expand the last dimension and stack the map to match the number of channels (n c) of the image. In doing so, we assume that the saliency is constant along depth. Algorithm 2 (white-box attack for SPD) and Algorithm 3 (white-box attack for SMD) follow the same pattern: starting from x adv ← x, the image is first updated with ε sign gradient steps whenever it does not fool the classifier; then, if the salient pixels sp(x adv) (Algorithm 2) or the saliency map s adv (Algorithm 3) do not fool the detector, a perturbation r ← ε sign(∇d(·, y)) is computed, r is repeated along the last dimension if n c > 1 until it matches n c, and the image is updated as x adv ← Clip(x adv + r). The first row shows the natural and adversarial images, and the second row shows their respective saliency maps. There are no perceptible differences between the saliency map of the original image and adversarial images generated using MIM, C&W, and DF. Further, we observe that for FGSM and JSMA, the gradients are all zero-valued. This is a second drawback of using gradients: they are unstable and generate uninformative saliency maps due to underflow. E L 2 DISTANCES BETWEEN SALIENCY MAPS CORRESPONDING TO ADVERSARIAL IMAGES GENERATED BY DIFFERENT ATTACKS F SINGLE BLACK-BOX ADVERSARIAL ATTACK DETECTOR Table 6 summarizes the accuracies of the different defenses when training a single detector against a combination of different types of black-box attacks. All methods perform relatively similarly to when trained against a single attack, obtaining slightly worse performance than when trained against a specific adversarial attack. This is useful in practice when it is unclear which adversarial attack is used. | We show that gradients are unable to capture shifts in saliency due to adversarial perturbations and present an alternative adversarial defense using learnt saliency models that is effective against both black-box and white-box attacks. | 1,209 | scitldr |
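The three white-box algorithms above share the same inner update: a signed-gradient step, an optional repeat of the two-dimensional perturbation along the channel dimension, and a clip back to the valid pixel range. A minimal sketch of that step follows; the function and argument names are illustrative, and the gradient is assumed to be supplied by whichever model (classifier or detector) is being attacked, with images scaled to [0, 1]:

```python
import numpy as np

def sign_step(x_adv, grad, eps, n_channels):
    # One signed-gradient update of the kind used in Algorithms 1-3.
    # `grad` is the gradient of the attacked model's loss w.r.t. its input;
    # how it is obtained is outside the scope of this sketch.
    r = eps * np.sign(grad)
    # If the gradient is two-dimensional but the image has several channels,
    # repeat it along the last dimension (as described above for n_c > 1).
    if r.ndim == 2 and n_channels > 1:
        r = np.repeat(r[..., None], n_channels, axis=-1)
    # Keep the perturbed image in the valid pixel range.
    return np.clip(x_adv + r, 0.0, 1.0)
```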
Disentangled encoding is an important step towards better representation learning. However, despite the numerous efforts, there still is no clear winner that captures the independent features of the data in an unsupervised fashion. In this work we empirically evaluate the performance of six unsupervised disentanglement approaches on the mpi3d toy dataset curated and released for the NeurIPS 2019 Disentanglement Challenge. The methods investigated in this work are Beta-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and Beta-TCVAE. The capacities of all models were progressively increased throughout the training and the hyper-parameters were kept intact across experiments. The methods were evaluated based on five disentanglement metrics, namely, DCI, Factor-VAE, IRS, MIG, and SAP-Score. Within the limitations of this study, the Beta-TCVAE approach was found to outperform its alternatives with respect to the normalized sum of metrics. However, a qualitative study of the encoded latents reveals that there is not a consistent correlation between the reported metrics and the disentanglement potential of the model. Unsupervised disentanglement is an open problem in the realm of representation learning, incentivized around interpretability BID8 BID1. A disentangled representation is a powerful tool in transfer learning, few shot learning, reinforcement learning, and semi-supervised learning of downstream tasks (; BID9 BID1. Here, we investigate the performance of some of the promising disentanglement methods from the family of variational autoencoders (VAE). The methods are evaluated based on five relatively established disentanglement metrics on the simplistic rendered images of the mpi3d toy dataset curated and released for the NeurIPS 2019 Disentanglement Challenge. To mitigate the sensitivity of the models to the initial state, as suggested by the findings of , an autoencoder model was pre-trained with the conventional VAE objective BID6 on the mpi3d toy dataset. This approach guaranteed that models did not collapse into a local minimum with little to no reconstruction. It also facilitated the training process given the constraints on the length of training by the challenge. In this preliminary study, we implemented the variational objective functions proposed by the following methods: β-VAE BID4, β-TCVAE, Factor-VAE BID5, Info-VAE BID11, DIP-I-VAE, and DIP-II-VAE BID7. In β-TCVAE, the mutual information between the data variables and latent variables is maximized, while the mutual information between the latent variables is minimized. Defining x n as the nth sample of the dataset, the evidence lower bound (ELBO) of this objective can be simplified as follows 1 DISPLAYFORM0 where z j denotes the jth dimension of the latents. In the above equation, the first term is the reconstruction loss. The second term is the distance between the assumed prior distribution of the latent space and the empirical posterior latent distribution. The last term is an indication of the total correlation (TC) between the latent variables which is a generalization of the mutual information for more than two variables BID10. A total capacity constraint which limits the KL divergence between the posterior latent distribution and the factorized prior can encourage the latent representation to be more factorised. However, this will act as an information bottleneck for the reconstruction task and result in a blurry reconstruction.
Thus, progressively increasing the information capacity of VAE during training can help facilitate the robust learning of the factorized latents BID2. This is achieved by introducing the capacity term C and defining the distance between distributions as the absolute deviation from C: DISPLAYFORM0 Gradually increasing C has an annealing effect on the constraint and increases the reconstruction capacity of the model. For each learning algorithm, the hyper-parameter sub-spaces were independently searched. However, in order for the results reported here to be comparable, the hyper-parameters were kept intact between the following experiments. The input images were 64 × 64 pixels and the latent space was of size 20. The model capacity parameter, C, was initiated at zero and gradually increased up to 25 over 2000 iterations. The learning rate was initiated at 0.001 and was reduced by a factor of 0.95 when the loss function (Equation) did not decrease after two consecutive epochs, down to a minimum of 0.0001. Batch size was set to 64. Optimization was carried out using the Adam optimizer with the default parameters β1 = 0.9 and β2 = 0.999. The network architectures and other hyper-parameters are detailed in Appendix A. The trained models were evaluated based on five evaluation metrics, namely, DCI, FactorVAE metric, IRS, MIG, and SAP-Score. Results of these evaluations are presented in TAB0. The non-ignored latent variables of each method are traversed and the results are visualized in Appendix B. Moreover, the evaluation logs during model training are visualized in Appendix C. All the models and experiments were implemented using the PyTorch deep learning library and packaged under the Disentanglement-PyTorch repository https://github.com/amir-abdi/disentanglement-pytorch 2. In this work we compared the degree of disentanglement in latent encodings of six variational learning algorithms, namely, β-VAE, Factor-VAE, DIP-I-VAE, DIP-II-VAE, Info-VAE, and β-TCVAE. The empirical results in TAB0 point to β-TCVAE being marginally the superior option and, consequently, it was chosen as the best performing approach. However, a qualitative study of the traversed latent spaces (Appendix B) reveals that none of the models encoded a true disentangled representation. Lastly, although the DIP-VAE-II model is underperforming according to the quantitative results, it has the least number of ignored latent variables with a promising latent traversal compared to other higher performing methods (Appendix B). As a result of these inconsistencies, we find the five metrics utilized in this study inadequate for the purpose of disentanglement evaluation. Among the limitations of this study is the insufficient search of the hyper-parameter space for all the six learning algorithms. Moreover, the NeurIPS 2019 Disentanglement Challenge imposed an 8-hour limit on the training time of the models which we found to be insufficient. Thus, while the maximum number of iterations was set to 200k in our experiments, this value was limited to 100k in the submissions made to the challenge portal. 2. The repository will be publicly released upon the completion of the competition. The encoder neural network in all experiments consisted of 5 convolutional layers with strides of 2, kernel sizes of 3 × 3, and the number of kernels gradually increasing from 32 to 256. The encoder ended with a dense linear layer which estimated the posterior latent distribution as a parametric Gaussian.
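A minimal PyTorch sketch of such an encoder is given below. The 3-channel input and the exact channel progression (32, 64, 128, 256, 256) are assumptions on our part, since the text only states that the five stride-2, 3 × 3 convolutions grow from 32 to 256 filters:

```python
import torch.nn as nn

class Encoder(nn.Module):
    # Five stride-2, 3x3 convolutions followed by a linear layer that outputs
    # the mean and log-variance of a 20-dimensional Gaussian posterior.
    def __init__(self, in_channels=3, latent_dim=20):
        super().__init__()
        channels = [in_channels, 32, 64, 128, 256, 256]  # assumed progression
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
        self.conv = nn.Sequential(*layers)        # 64x64 input -> 2x2 feature maps
        self.fc = nn.Linear(256 * 2 * 2, 2 * latent_dim)

    def forward(self, x):
        h = self.conv(x).flatten(start_dim=1)
        mu, logvar = self.fc(h).chunk(2, dim=-1)  # parametric Gaussian posterior
        return mu, logvar
```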
The decoder network consisted of one convolutional layer followed by 6 deconvolutional (transposed convolutional) layers, with kernel sizes of 4, strides of 2, and the number of kernels gradually decreasing from 256 down to the number of channels of the image space. ReLU activations were used throughout the architecture, except for the last layers of the encoder and decoder networks. | Inadequacy of Disentanglement Metrics | 1,210 | scitldr |
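The capacity-annealing trick from the preceding excerpt (replacing the KL term by its absolute deviation from a capacity C that grows linearly from 0 to 25 over 2,000 iterations) can be sketched as follows; the weighting factor gamma and the exact form of the KL input are assumptions, as the text does not fix them:

```python
def annealed_capacity_loss(recon_loss, kl_divergence, step,
                           c_max=25.0, anneal_steps=2000, gamma=1.0):
    # Capacity C is increased linearly from 0 to c_max over anneal_steps
    # iterations, matching the schedule stated above.
    c = min(1.0, step / anneal_steps) * c_max
    # The KL term is replaced by its absolute deviation from C; gamma is an
    # assumed weighting factor, not given in the excerpt.
    return recon_loss + gamma * abs(kl_divergence - c)
```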
Most of the prior work on multi-agent reinforcement learning (MARL) achieves optimal collaboration by directly learning a policy for each agent to maximize a common reward. In this paper, we aim to address this from a different angle. In particular, we consider scenarios where there are self-interested agents (i.e., worker agents) which have their own minds (preferences, intentions, skills, etc.) and can not be dictated to perform tasks they do not want to do. For achieving optimal coordination among these agents, we train a super agent (i.e., the manager) to manage them by first inferring their minds based on both current and past observations and then initiating contracts to assign suitable tasks to workers and promise to reward them with corresponding bonuses so that they will agree to work together. The objective of the manager is to maximize the overall productivity as well as minimize payments made to the workers for ad-hoc worker teaming. To train the manager, we propose Mind-aware Multi-agent Management Reinforcement Learning (M^3RL), which consists of agent modeling and policy learning. We have evaluated our approach in two environments, Resource Collection and Crafting, to simulate multi-agent management problems with various task settings and multiple designs for the worker agents. The experimental results have validated the effectiveness of our approach in modeling worker agents' minds online, and in achieving optimal ad-hoc teaming with good generalization and fast adaptation. As the main assumption and building block in economics, self-interested agents play a central role in our daily life. Selfish agents, with their private beliefs, preferences, intentions, and skills, could collaborate (ad-hoc teaming) effectively to make great achievements with proper incentives and contracts, an amazing phenomenon that happens every day in every corner of the world. However, most existing multi-agent reinforcement learning (MARL) methods focus on collaboration when agents selflessly share a common goal, expose their complete states and are willing to be trained towards the goal. While this is plausible in certain games, few papers address the more practical situations, in which agents are self-interested and inclined to show off, and only get motivated to work with proper incentives. In this paper, we try to model such behaviors. We have multiple workers and a manager working together on a set of tasks. The manager gets an external reward upon the completion of some tasks, or one specific task. Each worker has a skill set and preference over the tasks. Note that their skills and preferences may not align with each other (Fig. 1(a) ), and are not known to the manager (Fig. 1(b) ). Furthermore, the manager may not get any external reward until a specific task is complete, which depends on other tasks. By default, the self-interested workers simply choose the most preferred tasks, which is often unproductive from the perspective of the entire project. Therefore, the manager gives additional incentives in the form of contracts. Each contract assigns a goal and a bonus for achieving the goal to a worker. Figure 1: Illustration of our problem setup. Workers have different skills (abilities for completing tasks) and preferences (which tasks they like) indicated by the bar charts. They are self-interested and perform the tasks they prefer the most.
To achieve optimal collaboration, a manager has to first infer workers' minds, and assign the right bonuses to workers for finishing specified tasks in the form of contracts. Consequently, workers will adjust their intentions and work together accordingly. E.g., workers in the figure initially all want to do task B. To finish all tasks, the manager has to pay more bonuses to workers 1 and 2 so that they will perform A and C respectively. With the external incentives, workers may choose different goals than their preferences. Upon completion of assigned goals, the manager receives the rewards associated with those goals and makes the promised payments to the workers. To generate optimal contracts, the manager must infer the workers' minds and learn a good policy of goal and reward assignment. Conventional approaches of mechanism design tackle similar problems by imposing strong assumptions (e.g., skill/preference distributions, task dependencies, etc.) to find an analytic solution. In contrast, we aim to train a manager using reinforcement learning to i) assess the minds of workers (skills, preferences, intentions, etc.) on the fly, ii) optimally assign contracts to maximize a collaborative reward, and iii) adapt to diverse and even evolving workers and environments. For this, we propose a novel framework, Mind-aware Multi-agent Management Reinforcement Learning (M 3 RL), which entails both agent modeling for estimating workers' minds and policy learning for contract generation. For agent modeling, we infer workers' identities by their performance history, and track their internal states with a mind tracker trained by imitation learning (IL). For contract generation, we apply deep reinforcement learning (RL) to learn goal and bonus assignment policies. To improve the learning efficiency and adaptation, we also propose high-level successor representation (SR) learning BID17 and agent-wise ε-greedy exploration. As a proof of concept, we evaluate our approach in two environments: Resource Collection and Crafting in 2D Minecraft, to simulate multi-agent management problems. The setup and underlying assumptions are designed to mimic real world problems, where workers are not compelled to reveal their true preferences and skills, and there may be dependency between tasks, resulting in delayed and sparse reward signals. Workers may also be deceitful (e.g., accepting a contract even when the assigned goal is unreachable). Our experiments demonstrate that the manager trained by our approach can i) estimate the mind of each worker from its recent behaviors, ii) motivate the workers to finish less preferable or intermediate tasks by assigning the right bonuses, iii) adapt to changing teams, e.g., change of members and/or change of workers' skills and preferences, and iv) generalize well to different team sizes and novel environments. We have conducted substantial ablation studies by removing the key components, including IL, SR, agent-wise ε-greedy exploration, and performance history. Our approach shows a consistent performance in standard settings as well as in more challenging ones where workers' policies are stochastic and sub-optimal, or there are multiple levels of bonuses required to motivate workers. Multi-agent reinforcement learning. For collaboration problems, common multi-agent reinforcement learning BID19 BID7 usually trains agents BID26 BID11 BID28 BID27 BID20 so that they will jointly maximize a shared reward. There has also been work on assigning different credits to agents by factorized value functions BID16 BID12 BID35 BID30, but the spontaneous collaboration assumption is still required. In contrast, we instead train a manager to manage multiple self-interested workers for an optimal collaboration. Principal-agent problems. Our problem setup is closely related to principal-agent problems BID18 (or moral hazard problems BID14) in economics. Our manager and workers can be considered as the principal and agents respectively, where agents and principal have different objectives, and the principal needs to provide the right incentives to ensure that the agents make the best choices for what the principal delegates. These problems face similar technical challenges as our problem setup, e.g., information asymmetry between principals and agents, how to set up incentive costs, how to infer agents' types, how to monitor their behaviors, etc. Traditional approaches in economics BID25 BID15 BID32 build mathematical models to address these issues separately in stateless games, often with the assumption that the utility functions and the behavior patterns of the agents are known, leading to complicated models with many tunable parameters. In comparison, our paper provides a practical end-to-end computational framework to address this problem in a data-driven way without any assumption about the agents' utilities and their decision making processes. Moreover, this framework is adaptive to changes of agents' preferences and capabilities, which very few papers in economics have addressed. We also evaluate our approach in more complex game settings than the ones in the current economics literature. Mechanism design. Similar to our problem setting, mechanism design also tackles problems where agents have different and private preferences BID24 BID8. Its core idea is to set up rules so that the agents will truthfully reveal their preferences for their own interests, and ultimately an optimal collective outcome can be achieved. Our work differs from mechanism design in several ways. First, in addition to preferences, we also acknowledge the fact that agents may have different skills. Second, mechanism design does not consider sequential decision problems, whereas we have to dynamically change the contracts over time. Optimal reward design. The contract generation in our work can be seen as reward design. Some prior work has proposed optimal reward design approaches BID41 BID40 BID33 BID31, where a teacher designs the best reward so that the student will learn faster or alter its policy towards the target policy. In contrast, we try to use deep RL to train optimal reward design policies to manage multiple agents in more complex tasks. Meta-learning. Our work also resembles meta-learning BID37 BID10, which typically aims at learning a meta strategy for multiple tasks BID22 BID9 BID13 BID38 BID39 BID3 with good sample efficiency, or for a fast adaptation BID0. The meta-learning in this paper is for addressing the problem of ad-hoc teaming BID6 BID34 by training on a limited worker population. Theory of Mind. Our agent modeling is inspired by the prior work on computational theory of mind, where both Bayesian inference BID4 and end-to-end training BID29 have been applied to understand a single agent's decision making by inferring its mind. In this work, we extend this to optimal multi-agent management by understanding agents' minds.
In an environment, there is a set of goals G corresponding to several tasks, N self-interested workers with different minds, and a manager which can observe workers' behaviors but is agnostic of their true minds. Different from the common Markov game setting for MARL in prior work BID19, we use an independent Markov Decision Process (MDP), i.e., S i, A i, R i, T i, ∀i ∈ N, to model each worker, where S i and A i are the state space and action space, R i: S i × G i → R is the reward function, and T i: S i × A i → S i gives the state transition probabilities. For achieving goals, a worker has its own policy π i: S i × G i → A i. We define the key concepts in this work as follows. A contract is a combination of goal and bonus assignment initiated by the manager to a specific worker. For simplicity, we consider discrete bonuses sampled from a finite set B. Thus, for worker i at time t, it will receive a contract defined as (g t i, b t i), where g t i ∈ G is the assigned goal and b t i ∈ B is the promised bonus. Worker's mind. We model a worker's mind by its preferences, intentions, and skills. We do not study worker agents' beliefs in this paper, which we leave as future work. Preference. A worker's preference is formally defined as its bounded internal utilities of achieving different goals, u i = (u ig : g ∈ G), where 0 ≤ u ig ≤ u max. Combined with the received contract, the worker agent's reward function can be defined as DISPLAYFORM0 where s g is the goal state. Intention. The intention of a worker is the goal it is pursuing at any time, i.e., I t i ∈ G, which is not fully revealed to the manager. Based on the reward defined above, there are multiple ways to choose the goal. For a rational worker who is clear about its skills, it will choose the goal by maximizing the expected return, i.e., DISPLAYFORM1, where 0 < γ i ≤ 1 is its discount factor. However, this requires a worker to have a good estimate of its skills and to be honest, which is not always true. E.g., a worker may want to pursue some valuable goal that it can not reach. So an alternative way is to maximize the utility instead: DISPLAYFORM2. This will make a worker's behavior more deceptive as it may agree to pursue a goal but will rarely produce a fruitful result. In this work, we focus on the second way to achieve a more realistic simulation. After determining which goal to pursue, a worker will decide whether to sign the assigned contract. We denote this by d t i ∈ {0, 1}, where d t i = 1 means that worker i signs the contract given at time t. Skill. The skill of a worker is jointly determined by its state transition probabilities T i and its policy conditioned on its intention, i.e., π i (·|s t i, I t i) (DISPLAYFORM3). Manager's objective. The manager in our setting has its own utility v = (v g : g ∈ G), where v g ≥ 0 is the utility of achieving goal g. To maximize its gain, the manager needs to assign contracts to workers optimally. For the sake of realism, we do not assume that the manager knows for sure if a worker agent is really committed to the assignment. The only way to confirm this is to check whether the goal achieved by the worker is consistent with its last assigned goal. If so, then the manager will gain a certain reward based on its utility of that goal and pay the promised bonus to the worker. Thus, we may define the manager's reward function as: DISPLAYFORM4 where S t+1 = {s t+1 i: i = 1, · · ·, N} is the collective state of all present worker agents at time t + 1. The objective of the manager is to find optimal contract generation to maximize its expected return E[∞ t=0 γ t r t], where 0 < γ ≤ 1 is the discount factor for the manager. Note that the manager may get the reward of a goal multiple times if several workers reach that goal. Population of worker agents. The trained manager should be able to manage an arbitrary composition of workers rather than only specific teams of workers. For this, we maintain a population of worker agents during training, and sample several of them from that population as the present workers in each episode. The identities of these workers are tracked across episodes. In testing, we will sample workers from a new population that has not been seen in training. Our approach has three main components as shown in Figure 2: i) a performance history module for identification, ii) a mind tracker module for agent modeling, and iii) a manager module for learning goal and bonus assignment policies. We introduce the details of these three components as follows. To model a worker's mind, we first need to infer its identity so that the manager can distinguish it from other agents. Previous work BID29 typically identifies agents via their trajectories in recent episodes. This only works when diverse past trajectories of agents are available beforehand. However, this is impractical in our problem as the past trajectories of a worker depend on the manager's policy, and thus are highly correlated and can hardly cover all aspects of that agent. DISPLAYFORM0 Figure 2: Overview of our network architecture. In this work, we propose performance history for agent identification, which is inspired by the upper confidence bound (UCB) algorithm BID2 for multi-armed bandit (MAB) problems. Formally, the performance history of worker i is a set of matrices DISPLAYFORM0, where each entry is an empirical estimation of the probability of worker i finishing goal g within t steps after signing the contract if promised a bonus of b. We discuss how to update this estimate in Algorithm 1. These matrices are then flattened into a vector, which we encode into a history representation, h i, for worker i. With identification, the manager uses an independent mind tracker module with shared weights to update its belief of a worker's current mental state online by encoding both current and past information: DISPLAYFORM1, where the trajectory of the worker's behavior and the contracts it has received up to the current time t in the current episode serves as input. For contract generation, the manager has to consider all present workers as a context. Thus, we encode each worker's information and pool it to obtain a context representation. In addition to learning policies for individual workers, we also want the manager to estimate the overall productivity of a team. A common choice in previous literature (e.g., BID20) is to directly learn a centralized value function based on the context. However, this is not informative in our case, as the final return depends on achieving multiple goals and paying different bonuses. It is necessary to disentangle goal achievements, bonus payments, and the final net gain. To this end, we adopt the idea of successor representation (SR) BID17; BID5 BID21, but use it to estimate the expectation of accumulated goal achievement and bonus payment in the future instead of expected state visitation. By defining two vectors φ g (c t) and φ b (c t) indicating goal achievement and bonus payment at time t respectively, we may define our high-level SR, Φ g and Φ b, as DISPLAYFORM0. We discuss the details in Appendix A.1. For a joint training of these three modules, we use advantage actor-critic (A2C) BID23 to conduct on-policy updates, and learn the SR similar to BID17. In addition, we also use imitation learning (IL) to improve the mind tracker. In particular, we predict a worker's policy based on its mental state representation, i.e., π(·|s), which is learned by an additional cross-entropy loss for action prediction. Section A.2 summarizes the details. As our experimental results in Section 5 and Appendix C show, in difficult settings such as random preferences and multiple bonus levels, the policies based on the mental state representation trained with IL have a much better performance than the ones without it. As the manager is agnostic of workers' minds, it is important to equip the manager with a good exploration strategy to fully understand each worker's skills and preferences. A common exploration strategy in RL is ε-greedy, where an agent has a chance of ε to take random actions. However, this may cause premature ending of contracts where a worker does not have a sufficient amount of time to accomplish anything. Therefore, we adopt an agent-wise ε-greedy exploration, where a worker has a chance of ε to be assigned a random goal at the beginning of an episode and the manager will never change that goal assignment throughout the whole episode. In this way, it is easier for a manager to understand why or why not a worker is able to reach an assigned goal. The details can be seen from the rollout procedure (Algorithm 1) in Appendix B. We introduce the general task settings as follows. Note that without additional specification, workers are implemented as rule-based agents (detailed in Appendix D.2). In Resource Collection, the goals are defined as collecting certain types of resources. There are 4 types of resources on a map (FIG4) and the total quantity is 10. A worker can find any resources but only has the skills to dig out certain types of resources. Note that it may not be skilled at collecting its preferred resources. We consider three different settings: • S1: Each agent can collect up to three types of resources including its preferred type. • S2: Each agent can only collect one type of resource which may or may not be its preferred one. • S3: Similar to S2, except that an agent has a different random preference in each episode and thus its preference can not be inferred from history. A worker can take five actions: "move forward", "turn left", "turn right", "collect", and "stop", and its skill is reflected by the effect of taking the "collect" action. For workers, the internal utility of a resource is 1 if it is preferred; otherwise it is 0. The manager receives a reward of 3 for every resource collected under the contracts, and can choose to pay a worker with a bonus of 1 or 2. Different from previous work BID1 where all items can be directly crafted from raw materials, we consider three-level recipes (FIG4): crafting a top-level item requires crafting a certain intermediate item first. There are four work stations (colored blocks) for crafting the four types of items respectively. For the manager, each top-level item is worth a reward of 10, but collecting raw materials and crafting intermediate items do not have any reward.
Note that certain materials are needed for crafting both top-level items, so the manager must strategically choose which one to craft. In each episode, there are raw materials sufficient for crafting one to two top-level items. All collected materials and crafted items are shared in a common inventory. We define 8 goals including collecting raw materials and crafting items. Each worker prefers one of the collecting goals (the internal utility is 1), and is only capable of crafting one type of item. We expand the action space in Section 5.1.1 to include "craft", which will only take effect if the worker has the ability to craft the intended item and there are sufficient materials and/or intermediate items. The manager can choose a bonus from 0 to 2 for the contracts, where 0 means no employment. For comparison, we have evaluated the following baselines: • Ours w/o SR: Learning a value function directly w/o successor representations. • Ours w/o IL: Removing the action prediction loss. • Temporal ε-greedy: Replacing the agent-wise exploration with conventional ε-greedy exploration. • Agent identification using recent trajectories: Encoding an agent's trajectories in the most recent 20 episodes instead of its performance history, which is adopted from Rabinowitz et al. • UCB: Applying UCB BID2 by defining the management problem as N multi-armed bandit sub-problems, each of which is for a worker agent. In each MAB sub-problem, pulling an arm is equivalent to assigning a specific goal and payment combination to a worker agent (i.e., there are |G| · |B| arms for each worker agent). • GT types known: Revealing the ground-truth skill and preference of each worker and removing the performance history module, which serves as an estimation of the upper bound performance. During training, we maintain a population of 40 worker agents. In each episode, we sample a few of them (4 workers in Resource Collection and 8 workers in Crafting). All approaches we have evaluated follow the same training protocol. The learning curves shown in FIG5 demonstrate that ours consistently performs the best in all settings, and its converged rewards are comparable to the one trained using ground-truth agent types as part of the observations. Moreover, in more difficult settings, e.g., S3 of Resource Collection and Crafting, the benefits of IL, SR, agent-wise ε-greedy exploration, and the history representations based on the performance history are more significant. In particular, when there are tasks that do not have any reward themselves such as in Crafting, SR and IL appear to offer the most critical contributions. Without them, the network hardly gets any training signals. In all cases, the agent identification by encoding recent trajectories learns extremely slowly in Resource Collection and fails to learn anything at all in Crafting. In real world scenarios, the population of worker agents and their skills may evolve over time, which requires the manager to continuously and quickly adapt its policy to the unforeseeable changes through a good exploration. Thus we compare our agent-wise ε-greedy exploration with the temporal ε-greedy exploration in two cases: i) training with a population where workers' skills change drastically after 100,000 episodes (the manager does not know when and which workers' skill sets have been updated), and ii) testing with a team where 75% of the workers will be replaced with new ones after every 2,000 episodes. Both strategies keep the same constant exploration coefficient, i.e., ε = 0.1.
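The agent-wise ε-greedy exploration described above can be sketched as follows; the function names and the representation of the manager's learned policy are illustrative, not taken from the paper:

```python
import random

def assign_goals_for_episode(workers, goals, manager_policy, eps=0.1):
    # Agent-wise epsilon-greedy exploration: at the start of an episode, each
    # worker independently has a chance eps of being assigned a random goal
    # that is then kept fixed for the whole episode; otherwise the manager's
    # learned policy assigns (and may later update) the goal.
    assignments = {}
    for w in workers:
        if random.random() < eps:
            assignments[w] = {"goal": random.choice(goals), "fixed": True}
        else:
            assignments[w] = {"goal": manager_policy(w), "fixed": False}
    return assignments
```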
To have a better sense of the upper bound in the testing case, we also show the performance of the baseline that knows ground-truth agent information where no exploration is needed. The results of the two cases are demonstrated in Figure 5 and in Figure 6 respectively. In the first case, there are moments when the significant change in a population's skill distribution (i.e., how many workers can reach a specific goal) will require the manager to greatly change its policy. E.g., the first two changes in Figure 5a result in new types of resources being collected; the changes in Figure 5b force the team to craft a different type of top-level item. In such cases, our agent-wise ε-greedy exploration significantly improves the learning efficiency and increases the converged rewards. When the change is moderate, the policy learned by ours is fairly stable. In the second case, the managers trained by the three methods achieve similar converged rewards in training. While the converged reward of our approach is slightly lower than the upper bound due to exploration, it allows the manager to quickly adapt itself to a new team where it has never seen most of the team members. The temporal ε-greedy exploration on the other hand never achieves a comparable reward even though its performance is comparable to ours when managing a fixed population. We also want the manager's policy to have good generalization in novel scenarios unseen in training, which, in our problems, has two aspects: i) generalization in different numbers of present worker agents, and ii) generalization in new environments. It can be seen from FIG7 that as the number of workers increases, the manager achieves higher reward until it hits a plateau. Our approach consistently performs better in all settings. It even gains higher rewards than the one with ground-truth does when there are fewer workers. We also add a few walls to create novel environments unseen in training. With the additional obstacles, workers' paths become more complex, which increases the difficulty of inferring their true minds. As suggested by Figure 8, the performance indeed decreases the most in S3 of Resource Collection where online intention inference is critical as the workers do not have fixed preferences. So far, we have only considered rule-based worker agents with deterministic plans. To see if our approach can handle stochastic and sub-optimal worker policies, we may randomize a certain amount of actions taken by the workers (Figure 9) and train a manager with these random policies. When the randomness is moderate (e.g., ≤ 20%), the performance is still comparable to the one without random actions. As randomness increases, we start to see a larger decrease in reward. In Crafting specifically, random policies make the workers unlikely to achieve assigned goals within the time limit, thus the manager may never get top-level items if the policies are too random. More results. In addition to the main experimental results discussed above, we further test our approach from different perspectives: i) showing the effect of the minimum valid period of a contract (i.e., constraints for the manager's commitment), ii) multiple bonus levels, and iii) training RL agents as workers. We summarize these results in Appendix C. In this paper, we propose Mind-aware Multi-agent Management Reinforcement Learning (M 3 RL) for solving the collaboration problems among self-interested workers with different skills and preferences.
We train a manager to simultaneously infer workers' minds and optimally assign contracts to workers for maximizing the overall productivity, for which we combine imitation learning and reinforcement learning for a joint training of agent modeling and management policy optimization. We also improve the model performance by a few techniques including learning high-level successor representations, agent-wise ε-greedy exploration, and agent identification based on performance history. Results from extensive experiments demonstrate that our approach learns effectively, generalizes well, and achieves fast and continuous adaptation. We define two vectors indicating the goal achievement and bonus payment at time t: DISPLAYFORM0. Let w = (b : b ∈ B) be the weights for different bonus payments, then the reward for the manager at the current moment can be written as DISPLAYFORM1. Following the typical SR definition, we define our high-level SR as DISPLAYFORM2 and DISPLAYFORM3. Thus, the value function can be written as DISPLAYFORM4. A.2 DETAILS OF LEARNING The policy gradient for the goal assignment is: DISPLAYFORM5 where A(c t) is the advantage estimation defined as DISPLAYFORM6. The successor representations may be updated by the following gradient: DISPLAYFORM7 For imitation learning, we use the following cross-entropy loss: DISPLAYFORM8 Note that the gradient from IL will be combined with Eq. 6, Eq. 7, and Eq. 8 to update corresponding parameters (see Algorithm 2 in Appendix B for details). In tasks where the unknown dependency may introduce a large cost in the beginning of training, the manager's exploration may be restricted as the policy becomes too conservative in spending, which is common in many real world scenarios. To encourage exploration in these tasks, we adopt a two-phase learning curriculum. Namely, we optionally conduct a warm-up phase before the standard learning described above. In this warm-up phase, we give loans to the manager to cover its cost (i.e., setting the total payments to be zero when optimizing the networks). In practice, we apply this only to Crafting, where we set a fixed number of episodes at the beginning of training to be the warm-up phase. Note that this only applies to the optimization; we still need to deduct the payments from the rewards as the actual outcomes (this is equivalent to paying back the loans). We summarize the rollout algorithm and the learning algorithm in Algorithm 1 and Algorithm 2 respectively. The manager's commitment is defined as the shortest time a contract must remain unchanged, which essentially constrains how frequently the manager can change its goal and bonus assignment. While short-term commitment allows the manager to quickly update contracts once it has a better mind estimation or once a goal has been reached, long-term commitment often leads to a more accurate skill assessment when the tasks are difficult (e.g., crafting high-level items depends on the results of other tasks and thus needs a longer time). This is supported by the results in FIG9: shorter commitment works better in Resource Collection while Crafting needs a longer commitment. Note that the commitment constraint is 1 step and 10 steps for Resource Collection and Crafting respectively in all other experiments. In previous experiments, the internal utility of goals for a worker agent is either 0 or 1. Here, we sample the internal utility from 0 to 3.
Consequently, the manager needs to select the right bonus from multiple choices to pay each worker (i.e., a bonus from 1 to 4 for Resource Collection and a bonus from 0 to 4 for Crafting). In Resource Collection, the manager will get a reward of 5 for every collected resource; in Crafting, the reward for a top-level item is still 10. As shown in FIG10, the advantage of our approach is even more significant compared to the setting with a single payment level. Finally, we train a population of 40 RL worker agents for Resource Collection, where each one is trained with only one goal, and for each goal we train 10 agents using different random seeds. This creates a population with similar skill distributions as in S2, but with very different policies. FIG11 suggests that training to manage RL agents is slower as their policies are less predictable and less rational, but our approach can still gradually learn a good policy whose performance is comparable to the one using rule-based worker agents. Performance History Module. We flatten the matrices in a worker's performance history P i and concatenate them together to get a single vector. We then encode this vector into a 128-dim history representation h i. Mind Tracker Module. We represent the state of a worker by multiple channels corresponding to different types of items. We also use four channels to indicate its orientation. We augment the state with additional |A||G||B| channels, where a channel is either all ones or all zeros for indicating the action it takes, and the goal and bonus it receives. We then encode the state into a 128-dim hidden state by a convolutional layer with 64 channels and kernels of 1 × 1, a fully connected (FC) layer (128-dim), and an LSTM with 128 hidden units. We fuse this vector with the history representation h i. Specifically, we adopt an attention-based mechanism for the fusion, where we first get an attention vector (128-dim) from the history representation by an FC layer with sigmoid activation, and then do an element-wise product between the attention vector and the hidden state from the LSTM. The fused vector becomes m t i, i.e., the element-wise product of the attention vector and the LSTM hidden state. We fuse it with the state using the same mechanism, where φ(s) is the state encoding. By feeding the fused vector to an FC layer with softmax activation, we may get the predicted worker policy. Manager Module. For each worker, we concatenate its mind representation and history representation together and fuse it with the worker's state using the attention-based mechanism where the attention vector comes from the concatenated vector. By pooling over these fused vectors of individual workers, we can get the context vector, from which we construct the two successor representations by two separate FC layers. Here, we use average pooling, but one may also use other pooling mechanisms. Finally, for each worker, we concatenate the context vector with its fused vector obtained before pooling, and consequently get the goal policy and bonus policy by two FC layers with softmax activation. All modules are trained with RMSProp BID36 using a learning rate of 0.0004. Each rule-based worker finds a shortest path to the nearest location related to a goal, and its skill is defined as the post effect of its "collect" and "craft" actions.
In particular, for collecting a certain resource/material, it will go to the closest one that has the same type as the goal indicates and is currently not being collected by other agents, whereas for crafting an item, it will go to the corresponding work station if it is currently unoccupied. If a worker can perform collecting tasks, then after it takes the "collect" action, the item will be collected from the map and appear in the inventory; otherwise no real effect will appear. This applies to crafting tasks as well, except in crafting, task dependencies must also be satisfied before the "craft" action can take real effect. When considering random actions, for each step, we sample a random action with the specified chance to replace the action from the rule-based plan. We implement all RL worker agents using the same network architecture, where an agent's state is augmented by additional channels to include the reward for each goal (i.e., |G||B| channels). We use a convolution layer with 64 channels and kernels of 1 × 1 to encode the state, and feed it to a 128-dim FC layer and then an LSTM with a 128-dim hidden state. We then predict the policy using an FC layer with softmax activation based on the hidden state from the LSTM. For each goal, we train 10 RL worker agents using 10 random seeds. For each episode, we randomly assign a reward b ∈ B to an agent as the hypothetical reward it may receive from a manager. We then set the corresponding channel to be all ones and set the remaining |G||B| − 1 channels to be all zeros. Note that we assume all RL workers have the ability to perform "collect" and "craft" actions. | We propose Mind-aware Multi-agent Management Reinforcement Learning (M^3RL) for training a manager to motivate self-interested workers to achieve optimal collaboration by assigning suitable contracts to them. | 1,211 | scitldr |
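The attention-based fusion used repeatedly in the mind tracker and manager modules above (an FC layer with sigmoid activation applied to the history representation, multiplied element-wise with a 128-dim hidden state) can be sketched as follows; the module and argument names are ours:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    # Produces a 128-dim attention vector from the history representation and
    # multiplies it element-wise with the LSTM hidden state to give the
    # mental-state vector m_i^t described above.
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, history_rep, lstm_hidden):
        attention = torch.sigmoid(self.fc(history_rep))
        return attention * lstm_hidden  # element-wise product
```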
Inferring temporally coherent data features is crucial for a large variety of learning tasks. We propose a network architecture that introduces temporal recurrent connections for the internal state of the widely used residual blocks. We demonstrate that, with these connections, convolutional neural networks can more robustly learn stable temporal states that persist between evaluations. We demonstrate their potential for inferring high-quality super-resolution images from low resolution images produced with real-time renderers. This data arises in a wide range of applications, and is particularly challenging as it contains a strongly aliased signal. Hence, the data differs substantially from the smooth inputs encountered in natural videos, and existing techniques do not succeed at producing acceptable image quality. We additionally propose a series of careful adjustments of typical generative adversarial architectures for video super-resolution to arrive at a first model that can produce detailed, yet temporally coherent images from an aliased stream of inputs from a real-time renderer. Learning expressive and stable representations is a goal that lies at the heart of a vast range of deep learning tasks (; ; . While typical recurrent architectures focus on feedback loops to form persistent latent-spaces , we show that for inference tasks where the result is conditioned on a stream of inputs, these existing architectures unnecessarily complicate the learning task, and fail to reliably stabilize the inference. With our work, we propose a new type of connection for the very widely used building blocks of ResNet architectures that lets the network easily compare internal states in-place. The learned representation can then, e.g., yield a detailed image sequence with natural changes. We demonstrate this with a particularly challenging learning objective: we aim for the synthesis of detailed images from a stream of strongly aliased inputs. Specifically, we show that adversarially trained convolutional neural networks (CNNs) can be leveraged to produce detailed images from unfiltered, low-resolution images generated via point-sampling with a rasterization-based real-time renderer. Real-time graphics are the basis for a wide range of applications: Generating images with a sufficient resolution from low resolution, yet computationally light-weight renderings is a task that is, e.g., important for generating content for the high resolution screens of mobile devices, and is especially interesting for streaming services of games in order to compute the final resolution only on the client. Our work shares its goal with a variety of approaches that have been proposed for generating high-quality images for raytracing algorithms and purely image-based super-resolution algorithms (; . Our architecture differs from previous works as the proposed recurrent connection allows the network to learn a temporally stable latent-space representation that does not negatively impact the residual flow of a ResNet architecture. Also, the temporal connections for deeper layers of the network are important for successful learning, as we will demonstrate below. While the basic concept of depth-recurrent connections could potentially be applied to a variety of sequence-based learning tasks, we focus on demonstrating its potential for pushing forward the limits of real-time rendering. Hence, we additionally outline a series of modifications to existing architectures which are crucial for achieving high quality results for the strongly aliased input images from typical real-time rendering pipelines. Figure 1 (panels from left to right: LR, RDA modified, TecoGAN re-trained, DRR): Given a strongly aliased low-resolution input rendering with one sample per pixel, recurrent non-adversarial training (with modifications for fair comparisons) produces blurry results, and existing adversarial methods (, re-trained) introduce strong flickering artifacts. Trained on the same data, due to the proposed DRR connections our network infers more consistent spatio-temporal features (see the supplemental footage for a clear assessment of the temporal differences). A typical input for our network is shown on the left of Fig. 1. This application scenario is especially challenging for CNNs, since it requires working with images that need to be rendered at very high frame rates and, thus, exhibit severe aliasing due to point sampling and typically low resolutions. The aliasing not only distorts the spatial signal, but likewise affects the temporal changes. Therefore, a super-resolution (SR) network can't rely on receiving smoothly changing, filtered inputs that allow for localization of small features. Rather, it has to learn over the course of multiple frames to infer consistent output images (Fig. 1, right) from spatially and temporally aliased input content. As we will demonstrate in a number of studies below, this task is where our proposed depth-recurrent connections unfold their strength. They enable the network to match the data distribution of the targets, i.e., to synthesize images with a high visual quality in terms of detail as well as their temporal behavior. We show results and comparisons in the paper, and provide many additional evaluations in the supplemental material 1, where videos more clearly show spatial and temporal differences. Deep learning has been successfully applied to a large variety of image-based super-resolution tasks (; ; . Here, convolutional architectures (CNNs) with residual blocks are a very popular generator architecture that offers training stability as well as high quality inference. Targeting photo-realism, Generative Adversarial Networks (GAN) were introduced to prevent the undesirable smoothing of direct loss formulations (; . For GAN architectures, a second discriminator network is trained to classify real and generated samples, and is used to guide the generator network. The results of these GAN approaches were improved upon, e.g., by modifying the residual blocks and perceptual loss function , by employing the Earth Mover's distance to stabilize the training , and by accounting for the a-priori knowledge that fake samples exist . Using the feature-space differences of image classification networks, e.g. a pre-trained VGG network or the discriminator in a GAN setting, as perceptual loss was shown to be highly effective as well. Natural temporal changes of the generated content are crucial for video SR tasks. Often multiple subsequent frames are used to generate a high-resolution (HR) frame . These multi-frame approaches benefit from aligning the frames via warping, which requires an estimation of the image-space motions. As this is usually not readily available for natural videos, optical flow estimation networks are a popular choice. As for spatial content, employing an L 2 loss to enforce temporal coherence is not optimal.
Instead, extending the adversarial loss to the temporal domain improves the temporal coherence of small-scale details (Pérez-; . We likewise employ a spatio-temporal GAN architecture in the following. For deep learning methods, a variety of recurrent neural networks has been proposed (; ;), and they were shown to be useful for image generation tasks. Some approaches use recurrent connections to propagate a latent state over time inside the network, while others use the previously generated high-resolution (HR) output as input. The SR task also bears a certain similarity to other image enhancement techniques like inpainting (; a), where progressive methods provide state-of-the-art quality, or the removal of compression artifacts and text . Likewise, temporal coherence is an important aspect in style transfer. We use spatio-temporal self-supervision similar to previous work, but target a substantially more challenging data domain than natural videos in the following. Images of real-time renderers that can be generated quickly often exhibit strong aliasing. In this context, the existing work focuses on enhancing path-traced images. A common application is image denoising, e.g. for Monte-Carlo ray-tracing , where a network learns to predict a noise pattern in order to infer a smooth image from a sparse set of importance samples. Computer games and other real-time graphics applications, on the other hand, use a rasterization-based rendering pipeline. Simple geometry is shaded with simplified but increasingly complex lighting and texturing computations, and often only 1 sample per pixel is taken to maintain the required frame rates. Compared to natural and path-traced synthetic images these images are strongly under-sampled and exhibit aliasing artifacts that are different from the noisy images of path-tracing algorithms. Super-Sampling (SSAA), i.e. rendering at a higher resolution and averaging, is a straight-forward, but expensive anti-aliasing (AA) solution. While Multi-Sampling (MSAA) is a possible optimization, image-space methods (; ; 2012) are popular and try to reduce aliasing by detecting and smoothing edges in image-space after rendering, while temporal methods like Temporal Anti-Aliasing (TAA) use the warped previous samples to smooth and stabilize edges. Here, the use of deep learning models for image enhancement in real-time settings is sparse. To the best of our knowledge, only a closed-source solution exists, provided by Nvidia in the Turing architecture , and little is known about its internal realization. Instead, our goal is to provide an open solution, and at the same time improve the state-of-the-art in GAN-based SR. Our main goal is to facilitate the learning of a stable internal network representation that persists between repeated SR inference steps. In this way, the network can pass information from one frame to the next such that a persistent and stable prediction over time is achieved. This is crucial for processing strongly aliased input data streams. The internal network representation can then be used to generate the desired output distribution and, as demonstrated below, this improves temporal coherence and generates stable small-scale details in SR rendering. To enable the network to learn this internal representation, we augment the generator network with aligned, temporal connections inside the residual blocks (ResBlocks).
In order to demonstrate the advantages of our depth-recurrent architecture and the extensions for the real-time setting, we build on a GAN-based video super-resolution architecture from previous work. Here, the generator G is trained on image sequences and processes the current LR input as well as its warped previous output through a series of ResBlocks to produce a detailed and temporally coherent output. The spatio-temporal discriminator D s,t sees shorter 3-frame sequences of the output as well as the LR input as condition. We also employ a perceptual feature loss via the discriminator network. In addition to the adversarial and the feature loss, the generator is trained with a Ping-Pong term for long-term coherence and an L 2 loss in image space, the content loss, for stabilization. The residual blocks of frame t−1 and frame t are connected to pass the activations of a convolutional layer from frame t−1 to t. As we work with video data, we align these activations by warping them with the screen-space motions before they are concatenated to the ones in frame t and passed as input to the next layer. We call this setup Depth-Recurrent Residuals, and it is visualized in Fig. 3, DRR. In contrast to frame-recurrent networks, we employ connections in latent space, and in contrast to the commonly used "feed-back" loops of recurrent architectures, we use a "feed-forward" design that includes a warping step. Figure 2: Our DRR model applied to 2 different test scenes. The comparisons show input LR color images and the high-resolution outputs inferred by our method. The inputs exhibit strong spatial aliasing, and a similar amount of temporal aliasing, as is visible in the videos of the supplemental material. Despite these challenges, our model infers a stable and detailed output sequence. Within a multi-layer CNN, a variety of possible recurrent connections are imaginable. The goal of the specific variant we propose is that of temporal stability: with our connections a receiving layer can compare an aligned set of activations from the previous frame with the current activations. Hence, operations such as temporal derivatives are trivial to perform with in-place differences. The operations of a regular ResBlock at depth b can be summarized as follows: two convolutional layers, F_{b,1} = C⁺_{b,1}(F_{b−1,+}) and F_{b,2} = C⁺_{b,2}(F_{b,1}), are followed by an addition, yielding the final output F_{b,+} = F_{b,2} + F_{b−1,+}. Here, C_{<ResBlock>,<layer>} denotes a convolution, and the superscript + a ReLU activation. For our depth-recurrent residual blocks, we replace the second convolutional layer with the following operation: F_{b,2} = C⁺_{b,2}(F_{b,1} ⊕ W(F^{t−1}_{b,1})). This is followed by a summation as before to produce the output F_{b,+}. Here, W applies warping for the current layer based on externally computed velocities (either with another CNN, or via rendering as for our inputs), and ⊕ denotes feature concatenation. Compared to regular ResBlock convolution, we use 18.75% fewer features for DRR blocks in order to keep the overall number of weights per ResBlock constant. During our experiments, we apply the following curriculum learning steps, which typically help to stabilize the training runs and lead to improved final results in our tests: the depth-recurrent connections are linearly faded in during training in a staggered fashion, beginning with the connection of the first block, then the second block, and so on. Similarly, the discriminator network is faded in after a pre-training step of the generator (details are given in App. B).
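To make the block structure concrete, the following is a minimal PyTorch sketch of a DRR ResBlock as just described. The module and argument names, the channel sizes, the sign convention of the motion vectors, and the bilinear warp helper are our own illustrative choices rather than the authors' code; the `fade` argument mirrors the staggered fade-in of the recurrent connections.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(feat, flow):
    # feat: (B, C, H, W); flow: screen-space motion in pixels, (B, 2, H, W).
    B, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=feat.device),
                            torch.arange(W, device=feat.device), indexing="ij")
    gx = (xs.float().unsqueeze(0) + flow[:, 0]) / max(W - 1, 1) * 2 - 1
    gy = (ys.float().unsqueeze(0) + flow[:, 1]) / max(H - 1, 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1)                     # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)

class DRRBlock(nn.Module):
    """ResBlock with a depth-recurrent connection: the second convolution sees
    the current mid-block features concatenated with the warped mid-block
    features of the same block from the previous frame."""
    def __init__(self, ch=64, mid_ch=52):  # 18.75% fewer mid features keep the weight count constant
        super().__init__()
        self.conv1 = nn.Conv2d(ch, mid_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(2 * mid_ch, ch, 3, padding=1)

    def forward(self, x, prev_mid, flow, fade=1.0):
        mid = F.relu(self.conv1(x))                                        # F_{b,1}
        rec = warp(prev_mid, flow) if prev_mid is not None else torch.zeros_like(mid)
        out = F.relu(self.conv2(torch.cat([mid, fade * rec], dim=1)))      # F_{b,2}
        return x + out, mid          # F_{b,+} and the state carried to frame t+1
```

In a full generator, each block would return its `mid` activation so it can be fed back (warped) at the next frame, which is exactly the "feed-forward with warping" design discussed above.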
Figure 3: We propose to connect the center latentspace of ResBlock n at time t − 1 to the same ResBlock n at time t, yielding our DRR connection. A variant of our approach is DRR-in, in addition to a recurrent connections similar to previous work (RDA). Discussion To analyze whether the intuition that the DRR connection support learning stable features is correct, we have evaluated alternative connectivities. One variant from previous work connects the outputs of a ResBlock as inputs into the same ResBlock for the next frame to produce a hidden state. This is inspired by the recurrent denoising autoencoders , and shown as RDA in Fig. 3. Note however, that this version in the following uses our improved architecture and warping operations, both of which are not employed in the original version. In terms of the notation above, this variant is obtained via +. It yields spatially slightly worse but strongly flickering , as the layer now effectively has to compare different feature sets: that of the previous ResBlock, and the features of its own from the previous frame. RDA DRR-in DRR Figure 4: A visual comparison of the different types of recurrent connections with otherwise identical models (all three using our discriminator supervision, generator and warping). Our DRRs yield an improved image quality and temporal stability, as shown in the supplemental material. A variant of our recurrent ResNet architecture is to use an in-place connection instead. Thus, feeding the outputs of the previous ResBlock of both the current and previous frame as input to each ResBlock as shown in Fig. 3 DRR-in. I.e., F +. This variant of our DRR connections stabilizes the , but does typically in slightly lower quality inference. Qualitative for each variant are shown in Fig. 4. We believe that the primary reason for the improved stability of our variant is that the proposed connection is de-coupled from the regular flow of information along the residual blocks. Hence, the learned temporal features change the way the latent-space within the ResBlock is shaped, but it does influence and possibly impede the residuals added at the beginning and end of a ResBlock. Therefore, the network can focus on learning suitable temporal features, instead of additionally having to learn to work with a combination of spatial and temporal information (as with previous work and the variants discussed above). Thus, our approach preserves the structure which led to the success and widespread use of the original ResBlocks . In all three variants, applying the W function, i.e. warping with the screen-space motion vectors, improves the . Otherwise, the networks easily converge towards solutions with noticeable streak-like artifacts. It is also worth noting that the depth recurrent connections are substantially different from the recurrent input to the generator network, i.e., the previously generated HR frame, as they pass on learned latent-space features at the same location within the network. Thus, for new inference tasks the network does not need to analyze again the information encoded in an image. However, it is still challenging for the network to encode and transport coherent small-scale features across frames. Removing the recurrent input, using only our depth recurrent connections leads to temporally coherent , but induces a significant loss of spatial detail. Hence, rather than replacing it, the depth recurrent connections work best in conjunction with a recurrent input. 
We use images rendered in real-time with a rasterization-based rendering pipeline with 1 sample/pixel as our training data. The network is trained with 15-frame sequences of matching HR-LR image pairs, both rendered using Unity's high-definition rendering pipeline (HDRP). From there, we also capture the rendered screen-space motion that is used for warping. Details are given in App. A. Typical super-resolution models for natural videos are trained with a down-sampled version of the target. This gives the network a smooth, reliable average of the reference content. A generator network can detect the changes and sharpen existing features to generate detail. As our synthetic data exhibit strong aliasing artifacts, there are larger changes between adjacent pixels as well as temporal aliasing. This makes the data much more difficult to analyze. In contrast to a spatio-temporally integrated signal, as for natural video SR, sub-pixel features only show up rarely, and most of the time do not have a signal at all in our setting. This makes it much harder to correlate image space content in subsequent frames and achieve temporal coherence. Aliasing also makes it more difficult to detect the actual, underlying edges over a larger area and correct the input accordingly. These combined challenges make it necessary to introduce more temporal context, which we enable with the DRR connections. In addition to the depth-recurrent residuals described above, our model is also subject to multiple changes which we found crucial for dealing with the challenging input data from real-time renderers. First, we use the motion vectors generated by the renderer for all warping operations. These are faster to compute and more accurate than the motion estimated by an auxiliary network. We also found resize-convolutions with bilinear interpolation instead of (strided) deconvolutions important to reduce checkerboard artifacts. The specific task the network needs to perform in our target application reveals the limits of the deconvolution approach, requiring to incorporate resize-convolutions as described below. Providing depth as LR input to the generator also slightly enhances edges and reduces aliasing artifacts such as staircasing. This effect is stronger near the camera, which suggests that it is related to the non-linear dependency of depth to view-space distance. Introducing additional data fields from the rasterization pipeline, such as surface normals, did not yield improvements. Previous work proposes to let the generator output be a residual that is added to the bicubically upscaled LR input color, instead of directly generating the final output. Interestingly, for our strongly aliased data we found it beneficial to instead use bilinear interpolation for up-scaling the inputs, as bicubic interpolation often leads to overshooting near edges. This caused difficulties for the generator and did not yield any improvements in terms of image quality. However, also the bilinearly up-scaled LR color, to which the residual is applied, still contains strong aliasing. To perform the necessary anti-aliasing, the network has to detect structures based on single samples in the LR input, as it does not have direct access to the up-scaled version in this case, and then perform the necessary correction via the residual addition to the HR version on the other end of the network. This is sufficient for the smoother data of natural videos, where detail enhancement and sharpening operations can be less accurate and still yield good . 
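As a small illustration of the resize-convolution change mentioned above, the sketch below replaces a strided deconvolution with bilinear up-sampling followed by a 3×3 convolution. The channel counts and module name are illustrative, not taken from the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class ResizeConv(nn.Module):
    """Bilinear up-sampling followed by a 3x3 convolution, used in place of a
    strided deconvolution to reduce checkerboard artifacts on aliased inputs."""
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bilinear",
                          align_corners=False)
        return F.relu(self.conv(x))
```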
Replacing the final addition of the residual with 2 additional convolutional layers further improves image quality. For these convolutions, we concatenate the up-scaled color after the resize convolutions (details in App. B). With this modification, the network does not need to transport the LR color through the LR ResBlocks and can choose based on the up-sampled LR color what parts of it to use. This adds ca. 1% additional weights compared to the standard setup, but is computationally more expensive as high-res tensors are convolved. Nonetheless, the gain in image quality outweighed the slight increases in computations and memory. The concatenation of the DRRs increases the number of input channels and, therefore, the number of weights of the receiving layer. Thus, to keep the overall amount of weights consistent with previous versions and allow for a fair comparison, the number of base channels of the ResBlocks is reduced when using these recurrent connections. Discriminator The discriminator facilitates the generation of sharp and detailed images. While a network trained with only L 2 supervision produces a more blurred output, "L 2 only" in Fig. 5, activating the spatio-temporal discriminator yields a significant improvement, see "D s,t ". As visible in Fig. 5, we found it important to use a discriminator without conditional input in order to balance the networks. As a large part of the loss function employs the discriminator (both adversarial and feature losses), its quality is crucial. E.g., by increasing the size, i.e., adding layers or channels, and therefore complexity of the discriminator, further improvements are achieved. Increasing depth values allows D to detect more complex features, thus providing gradients towards more complex image content. This also avoids that the generator quality is limited by the lack of discriminator gradient information. The feature loss similarly profits from more complex features, and while a larger discriminator is slower to train, it does not affect performance during inference. However, a larger discriminator can make the training more unstable, and eventually become unbalanced. We found a discriminator with 14 layers and 3.7M weights (D s,t), and a generator with 26 layers and 769k weights to be a stable combination that yields high quality , as long as adaptive balancing is used as outlined in App. B. Depth-Recurrent Residuals A generator with Depth-Recurrent Residuals (DRRs) gives good , even when trained with the largest discriminator where a generator without DRRs tends to produce strong artifacts, Fig. 5 (D s,t). The generator network can process the detailed gradient feedback it receives from the large discriminator network, and use it to generate a large amount of image-space detail. Letting the DRR connections fade into the network further improves the . When the connections are activated right from the beginning of the training process, the image quality is typically reduced. Loss Functions Despite the stabilization via DRR connections, the proposed learning setup requires careful balancing. The additional details provided by a larger discriminator, combined with the strong learning objective to produce detail, yields overall better but more small-scale differences to the target. This in turn increases the L 2 content loss, which however is needed to stabilize the training. In practice, a parameter is introduced to choose between temporal stability and large amounts of image-space detail. 
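A possible realization of this output stage is sketched below: instead of adding the network output as a residual to the up-scaled LR colour, the bilinearly up-scaled colour is concatenated to the HR feature maps and a couple of extra convolutions produce the final image. The exact layer count and channel sizes here are assumptions for illustration (the paper mentions 2 extra convolutions in the text and 3 in the appendix figure).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OutputHead(nn.Module):
    """Final stage after the resize convolutions: concatenate the bilinearly
    up-scaled LR colour with the HR features and convolve, rather than adding
    a residual to the up-scaled colour."""
    def __init__(self, feat_ch=64, hidden=32, scale=4):
        super().__init__()
        self.scale = scale
        self.conv1 = nn.Conv2d(feat_ch + 3, hidden, 3, padding=1)
        self.conv2 = nn.Conv2d(hidden, 3, 3, padding=1)

    def forward(self, hr_feat, lr_color):
        up = F.interpolate(lr_color, scale_factor=self.scale, mode="bilinear",
                           align_corners=False)
        x = F.relu(self.conv1(torch.cat([hr_feat, up], dim=1)))
        return self.conv2(x)
```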
An important source of detail seems to stem from the discriminator feature loss, i.e., by using only the regular adversarial loss, details are noticeably reduced. In practice, a strong feature loss effectively reduces the smoothing caused by the content loss and gives the best . Similar to a spatial L 2 loss, we found that a temporal L 1 loss, as employed by , leads to an undesirable smoothing in our GAN setting. Even worse, when using a temporal L 1 loss without warping, it produces strong streak-like artifacts. Modifications such as additional edge loss terms did not alleviate this behavior. While others have reported improvements by using perceptual losses with pre-trained networks (such as VGG), we likewise found that this yields different, but not necessarily better outputs. Presumably, the different nature of the images produced by the rasterization pipelines is less amenable to the VGG features. The loss variants used in previous works are shown in the bottom row of Fig. 5. We compare our model with and without DRR connections to state-of-the-art methods from previous works. Specifically, we compare to supervised approaches (DUF , Enet , FRVSR ), and TecoGAN as a GAN-based alternative. As shown by the qualitative comparison in Fig. 6 and the supplemental videos, all existing approaches have difficulties to stabilize the input data and produce strong artifacts. We also re-trained a TecoGAN model with our data set, which also leads to undesirable . Standard metrics for image quality often fail to relate to the image quality perceived by humans. PSNR values fluctuate during training and across and in practice have very limited significance. We thus use perceptual metrics, measuring LPIPS for perceptual similarity to single target images, i.e. LPIPS(G(x t), y t ) (b). As measure for temporal coherence we evaluate tLP = |LPIPS(G(x t−1), G(x t)) − LPIPS(y t−1, y t)| common problem that the temporal coherence of blurry data is generally better, as can be seen from "L 2 ". Thus, it should be viewed in conjunction with the spatial LPIPS for assessing the overall video quality. For completeness, we also compute the in-place metric T-diff = ||W (G(x t−1)) − G(x t)|| 2 ), using the captured HR motion for increased accuracy. The are given in table 1, and confirm the qualitative comparisons so far: our method yields the best temporal coherence in terms of tLP score. In addition, our generators with warped connections all yield excellent LPIPS scores around 2.7 (last three columns), but the DRR connections are crucial to obtain good temporal changes in addition to the details. The DRR tLP score is only surpassed by L 2, which, however, yields clearly sub-optimal image details. As we focus on real-time rendering as our use case scenario, ideally the performance of the inference step needs to surpass the performance of the renderer. For a desired output resolution of 1920×1080, our pre-trained model takes 113ms per frame on average. 2 Although this is not yet fast enough for real-time applications, we expect that techniques such as network compression and evaluation of the models with dedicated hardware will easily yield very significant performance improvements. We have demonstrated how depth-recurrent residual connections can be leveraged to learn stable internal latent-space representations in conditional generator architectures. The DRR connections are particularly promising for iterative models with strongly aliased data, such as low-resolution inputs from a real-time renderer. 
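The two temporal metrics above can be computed as in the following sketch. It assumes the pip-installable `lpips` package of Zhang et al.; inputs are expected as (N, 3, H, W) tensors scaled to [-1, 1], and the previous generated frame is assumed to be already warped with the captured HR motion for T-diff.

```python
import torch
import lpips  # assumed: pip install lpips

lpips_fn = lpips.LPIPS(net="alex")

def tlp(g_prev, g_cur, y_prev, y_cur):
    """tLP = |LPIPS(G(x_{t-1}), G(x_t)) - LPIPS(y_{t-1}, y_t)| for one frame pair."""
    return (lpips_fn(g_prev, g_cur) - lpips_fn(y_prev, y_cur)).abs().mean()

def t_diff(warped_g_prev, g_cur):
    """In-place temporal difference ||W(G(x_{t-1})) - G(x_t)||_2."""
    return ((warped_g_prev - g_cur) ** 2).sum(dim=(1, 2, 3)).sqrt().mean()
```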
We have additionally shown how to achieve high quality synthesis in the context of real-time rendering by carefully analyzing and adjusting the network architecture. We anticipate that DRRs could be beneficial for a variety of other tasks such as object tracking and physics predictions . A DATA As source of our data we use the projects "FPS Sample" and "Book of the Dead: Environment" (b;a) for the Unity engine, both use the HDRP. We captured a total of 57 120-frame sequences, split 50-5-2 for training, validation and testing. For each frame we have lit color (the final image), unlit diffuse color, view-space surface normals, roughness, screen-space motion and depth for both HR and LR. This data is easy to acquire as it can be inferred from the scene, geometry and materials and is rendered by default in Unity's HDRP. However, the use of unlit color, normals or roughness had no tangible effects during our tests. Most post-processing effect have been turned off, but the HR color is augmented with TAA. HR is rendered and captured at a resolution of 512 × 512, LR at 128 × 128. The details of our network architecture are given in figures 8 and 9. Our generator network represents a modified TecoGAN generator, and a lager version of TecoGAN's discriminator is likewise used. A typical training takes 400,000 iterations (ca. 287 epochs). All discriminator related losses, the loss for the discriminator itself as well as the adversarial and feature loss to the generator, are linearly faded in over the first 40k iterations starting from 0. When training with DRR the temporal connections are faded in later during training in a staggered fashion, beginning at 60k iterations with the first block and 8k iterations between each. Every block takes 10k iterations to fully fade in. We also use an exponential learning rate decay during the last 150k iterations, decaying to 65% every 30k iterations. The most dominant losses are the L 2, the feature loss and the ping-pong loss. To balance the adversarial and feature loss during training, the discriminator is only trained if it is not too strong to keep it from overwhelming the generator. We train the discriminator only if the EMA of the sum of b disc = − log(D(y))− logD(G(x)), is below a threshold. For our setting, 0.4 yields stable GAN training runs. For our final DRR version we use use following weights for the terms of our loss function. L2 content loss: 1.0, ping-pong loss: 0.5, feature loss: 3.6, and adversarial loss: 0.1. In addition, we use the following hyper-parameter settings. discriminator learning rate: 3.5e-5, discriminator balancing threshold: 0.4, learning rate: 5e-5, adam beta: 0.9. Figure 8: Our modified generator: in addition to the LR color and frame-recurrent input (generator output of the previous frame encoded in 3 * 16 channel) we also add the LR depth. After an initial convolution most work is done in the 10 sequential ResBlocks (same as TecoGAN, 9 omitted for visibility). The latent image is then scaled to output resolution by 2 resize convolutions and fine-tuned by another 3 convolutions after the bilinear interpolated LR color is appended. All convolutions have 3 × 3 kernel size and ReLU or no activation. To highlight the range of content our trained model can produce, we show additional sequences in terms of their low-resolution input and inferred output in figures 10 and 11. In order to fully assess the quality, please check the supplemental material document, which contains animated sequences. 
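The adaptive balancing described in App. B can be sketched as follows, using the loss weights and the 0.4 threshold reported there. The EMA decay rate and the exact form of the gate are our own assumptions; we follow the b_disc expression literally as written in the text.

```python
import torch

# loss weights and balancing threshold reported in App. B
W_CONTENT, W_PINGPONG, W_FEAT, W_ADV = 1.0, 0.5, 3.6, 0.1
D_THRESHOLD, EMA_DECAY = 0.4, 0.99   # EMA decay is an assumption
ema_b = 0.0

def generator_loss(l2, pingpong, feat, adv):
    return W_CONTENT * l2 + W_PINGPONG * pingpong + W_FEAT * feat + W_ADV * adv

def maybe_train_discriminator(d_real, d_fake, opt_d, d_loss):
    """Update D only while it is not too strong, gauged by an EMA of
    b_disc = -log D(y) - log D(G(x))."""
    global ema_b
    b = (-torch.log(d_real + 1e-8) - torch.log(d_fake + 1e-8)).mean().item()
    ema_b = EMA_DECAY * ema_b + (1 - EMA_DECAY) * b
    if ema_b < D_THRESHOLD:
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()
```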
| A method for persistent latent states in ResBlocks, demonstrated for super-resolution of aliased image sequences. | 1,212 | scitldr |
The backpropagation algorithm is the most popular algorithm for training neural networks nowadays. However, it suffers from the forward locking, backward locking and update locking problems, especially when a neural network is so large that its layers are distributed across multiple devices. Existing solutions either can only handle one locking problem or lead to severe accuracy loss or memory inefficiency. Moreover, none of them consider the straggler problem among devices. In this paper, we propose \textbf{Layer-wise Staleness} and a novel efficient training algorithm, \textbf{Diversely Stale Parameters} (DSP), which can address all these challenges without loss of accuracy or memory issues. We also analyze the convergence of DSP with two popular gradient-based methods and prove that both of them are guaranteed to converge to critical points for non-convex problems. Finally, extensive experimental results on training deep convolutional neural networks demonstrate that our proposed DSP algorithm can achieve significant training speedup with stronger robustness and better generalization than compared methods. | We propose Diversely Stale Parameters to break the lockings of the backpropagation algorithm and train a CNN in parallel. | 1,213 | scitldr |
The emergence of language in multi-agent settings is a promising research direction to ground natural language in simulated agents. If AI would be able to understand the meaning of language through its using it, it could also transfer it to other situations flexibly. That is seen as an important step towards achieving general AI. The scope of emergent communication is so far, however, still limited. It is necessary to enhance the learning possibilities for skills associated with communication to increase the emergable complexity. We took an example from human language acquisition and the importance of the empathic connection in this process. We propose an approach to introduce the notion of empathy to multi-agent deep reinforcement learning. We extend existing approaches on referential games with an auxiliary task for the speaker to predict the listener's mind change improving the learning time. Our experiments show the high potential of this architectural element by doubling the learning speed of the test setup. Natural language is not as rule-based as researchers in supervised language learning would prefer. There are limitless context-dependent notions to it, and flexible language use is considered as a necessary aspect of general AI. Originally, natural language emerged through a necessity to achieve successful coordination. Hence, a general AI would need to understand the functional aspects of language and learn communication through interaction . These considerations led to the research field of emergent communication and the attempt to ground natural language through reinforcement learning. Deep reinforcement learning has achieved some impressive over the last years . One of its principal aspects is the ability to extract features from high dimensional input data without manual preprocessing. This capability is especially useful if the necessary representation is unknown to the designer. Classical deep reinforcement learning approaches rely on a large number of training examples, mainly because the sparse reward hardly provides enough feedback to shape the deep layers. These deep layers are responsible for the embedding of input data into a meaningful representation. Therefore, it takes many training steps before a useful representation emerges; if it converges at all. According to the theory of the predictive mind , the human brain generates richer feedback through learning several unsupervised prediction tasks while training on the main task. The purpose of these predictions is to produce more and more expressive models and representations of the world. achieved a far more expressive representation of their visual inputs by learning an auxiliary prediction task. The sole purpose of the auxiliary net is to predict the change in the visual input given the last movement action. Training this net does not directly affect the original task, but it refines the visual representation to reflect the concepts of a 3D world. used predictive tasks to ground natural language, but only focused on better understanding an existent language. We transfer the auxiliary prediction to the task of active communication. This goes along with the theory of mind stating that an essential part of intelligence in interaction emerges through predicting the mental state of the interaction partner. We let the speaker train an auxiliary net that tries to predict how the speaker's utterance will change the listener's hidden state. 
That resembles humans empathetic way of understanding what a message will do to the listener. We assume this leads to a more communication effective representation of the sensory input; in other words, the input encoding becomes more communicatable. The effect is visible in the essential acceleration of learning successes in developing a shared language. Our main contribution is an elegant extension to multi-agent deep reinforcement learning (MADRL) algorithms aiming to emerge a communication. It resembles an empathic connection between speaker and listener, which leads to faster convergence to a shared language. We doubled the learning speed of a MADRL algorithm playing a referential game by introducing this auxiliary prediction task to the speaking agent. We attribute the improvement to the richer gradients in the lower layers of the neural network to embed the input. Reinforcement Learning (RL) An agent in a reinforcement learning setting can fully or partially observe its current state s ∈ S and is able to choose an action a ∈ A through a policy π(s) = a. The chosen action will lead to receiving a reward R. The agent's goal in its environment is to maximize the expected reward . RL with neural networks (NN) Using neural networks as a policy representation for reinforcement learning has the benefit of being able to represent any policy function and the downside of needing a huge number of data samples to learn. In our case, the policy outputs a direct probability for taking each action. Such policies can be updated by using Policy Gradient methods . The policy parameters θ, in this case, the parameters of the neural net, are updated according to their effect on the objective J with a learning rate β: Using the REINFORCE algorithm the effect on the objective can be estimated as the following: Long Short-Term Memory Network (LSTM) Recurrent neural networks (RNN) can accumulate input in an internal representation over time as well as produce a consistent output over several time steps from it. LSTMs are RNNS that are specifically created to remember information over an extended period of steps . Auxiliary tasks in RL Auxiliary unsupervised tasks were introduced into RL with NNs by. They proposed an architecture that predicts the next visual input, given the internal representation of the last visual inputs and the last taken action. The unsupervised task of correctly predicting the next visual input leads to better performance on the main task, which was playing an atari game. They assume that the auxiliary task enforces a more expressive internal representation of the visual input, which then aids the main task. transferred this auxiliary task to natural language acquisition by predicting the next word spoken by an instructor. 3 started the field of learning communication in artificial agents with the aim to research the mechanisms by which language and communication emerge. contributed by using classical genetic scenarios, where "male" agents had to find "female" agents based on signals they emitted. They extended their setting in 1993 to include predator and prey agents and showed that known prey strategies as herding emerge if the agents have the possibility to communicate . achieved the emergence of more robust signals by introducing lying agents (parasites) in this setting. The successive advances in the field of learning communication can be assigned to the progress of learning algorithms for neural networks for a big part. 
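The displayed update rule and gradient estimator referenced in the passage above appear to have been lost in extraction; the standard forms they describe are θ ← θ + β ∇θJ(θ) and ∇θJ(θ) ≈ E[R ∇θ log πθ(a|s)] (REINFORCE). A minimal PyTorch sketch of such an update, with illustrative names and a plain SGD step instead of any particular optimizer:

```python
import torch

def reinforce_step(log_prob_action, reward, parameters, beta=1e-3):
    """One REINFORCE update: theta <- theta + beta * R * grad log pi(a|s).
    `log_prob_action` is the log-probability of the sampled action(s) under
    the current policy, `reward` the scalar return of the episode."""
    loss = -(reward * log_prob_action).sum()   # descend -J to ascend J
    loss.backward()
    with torch.no_grad():
        for p in parameters:
            if p.grad is not None:
                p -= beta * p.grad
                p.grad.zero_()
```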
started using Q-learning on the pursuit problem, including learned communication, but also follow up work only reached simple information sharing about the prey position . The field was then alleviated to a new level of complexity by and. They transferred the new progress in deep learning to multi-agent coordination to emerge even more complex communication patterns. In previous work, we have shown that these algorithms can be further improved even to solve tasks that lie outside the communication range [blind]. Though many multi-agent learning setups use communication as a matter of success, only some focus on the emerging protocols and their properties . From those focusing on the emergence of communication or language, a significant number of publications used referential games as a testbed (; ; ; ; ;). Speaker Listener cat lives legs fluffy car cat bird sofa Figure 1: Illustration of the referential game used as a testbed. The speaker agent takes the concept "cat" as input and forms a message to describe it. The listener agent takes the message and several candidate concepts as input, and decides which concept is the target seen by the speaker. Especially interesting in that context is the work of , as they could vividly show, that this approach to language emergence can lead to a flexible language use which could be understood by humans even when applied to objects unknown to the algorithm, yet. We introduce the idea of auxiliary tasks into the field of language emergence. The speaking agent is equipped with an auxiliary single-layer perceptron, to predict the hidden state of the listener agent, after this ultimately encoded the message. The input for this prediction is the hidden state of the speaker right before it starts forming the message. The aim is to achieve a high relation between the hidden states of both agents. This signifies the speaker can communicate its means well. We state that in this application, the auxiliary prediction resembles empathy in humans, as the speaker tries to predict how its utterance will affect the listener's mindset. The prediction task is unsupervised and can be trained on the same samples and at the same time as the main task. Training the main RL task automatically generates the samples for the unsupervised task. The gradients can be backpropagated into the encoding layers of the speaker, where they are added to the gradients of the RL task and optimized together. With our approach, we further enhance the possibilities in language emergence by providing richer feedback to form the internal communicatable representation in the speaking agent. We provide experimental evidence that these extensions can lead to a doubled learning speed when added to an existing approach to language emergence in referential games. To test the potential of the auxiliary prediction task, we used the referential game setup proposed by shown in Fig. 1. Out of the existing implementations, we chose this one because the setup has proven to converge to an emergent communication at a relatively low computational cost. Dataset We use the Visual Attributes for Concepts Dataset (VisA) of. It contains attribute annotations for 500 concrete concepts (like cat or sofa), annotated with 636 general attributes (like is black or made of wood). The annotations are human-made and therefore carry an inherent structure that can be seen as disentangled. Agent Setup A speaker agent gets shown a target concept t that is realized as a binary vector with as many entries as possible attributes. 
The speaker then uses a policy π S to produce a message m out of an alphabet of discrete symbols (numbers 1 to 100 in our case). The message is then interpreted by a listener agent that observes several candidate concepts C at the same time. The listener uses a pointing policy π L to decide, which of the candidate concepts the speaker agent is describing. Both agents receive a shared reward R if the listener correctly identifies the described concept. The speaker agent consists of a single encoding layer to encode the input vector into a dense representation h S and an LSTM to produce a message out of h S. The listener agent encodes the message with an embedding layer and an LSTM into a dense representation h L. The listener contains an encoding layer as well, which it applies to every candidate concept respectively to generate a set of representations. It calculates the compliance between message and candidate concepts with the dot product between the message representation and the concept representation. The is treated as a Gibbs Distribution. Both policies π S and π L output a probability distribution over all possible actions. For the speaker, the possible actions are the elements of the alphabet, once for every symbol over the length of the message. For the listener, the actions consist of choosing each of the candidate concepts in C. For more details see. Learning As part of the reinforcement learning setting, the agents try to maximize the expected reward. They do not share any parameters but try to maximize the probability of their action that ed in a positive reward, respectively. Therefore together, they maximize the objective function in each training instance: Empathy Extension To generate richer gradients for shaping the deep encoding layers of the speaker, we assign an auxiliary unsupervised prediction task to it. We add a single layer MultiLayer-Perceptron (MLP) to the graph, which predicts the activation of the listener's hidden layer h L after hearing the full message. The input is the activation of the speaker's hidden layer h S before starting the utterance. That corresponds to predicting the effect of the to-be-made sentence on the mindset of the listener. We use the mean absolute error as the loss function for the prediction task: where α is a weighting factor that ensures that the unsupervised task does not corrupt the main reinforcement learning task. An α close to 1 would mean, that effectively manipulating the listener's mind is as important to the speaker as communicating the target concept. w θ is the linear transformation through the MLP with sigmoid activation function σ. The gradients of the unsupervised task are calculated on the same trial and added to the gradients of the reinforcement task. Hence, no additional training steps are necessary. The optimization then uses the summed up gradient. We implemented the setup using the EGG toolkit Kharitonov et al.. We found that with an α of around 0.1, i.e. weighting the prediction gradients 10% compared to the main task, we can increase the learning speed to double or triple. To be comparable, we used the same initialization and sampling seeds on both options. Good or bad initialization can make up for half the learning speed, but the relative learning speed improvement through using the prediction task stays consistent over different initializations. In Fig. 3 we compared the learning curves with and without the prediction task. 
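The auxiliary "empathy" head described above can be written compactly as below. This is a sketch under stated assumptions: detaching the listener's hidden state is our choice (the agents share no parameters, so the auxiliary loss should only shape the speaker's encoder), and the class and function names are illustrative.

```python
import torch
import torch.nn as nn

class EmpathyHead(nn.Module):
    """Single-layer perceptron that predicts the listener's hidden state h_L
    (after hearing the full message) from the speaker's hidden state h_S
    (before producing the message)."""
    def __init__(self, dim_speaker, dim_listener):
        super().__init__()
        self.proj = nn.Linear(dim_speaker, dim_listener)

    def forward(self, h_s):
        return torch.sigmoid(self.proj(h_s))

def empathy_loss(head, h_s, h_l, alpha=0.1):
    # mean absolute error, down-weighted so the auxiliary task contributes
    # only ~10% relative to the main REINFORCE objective
    return alpha * torch.abs(h_l.detach() - head(h_s)).mean()
```

The resulting gradients are simply added to those of the reinforcement objective and optimized in the same step, as described in the text.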
For a game setup with two candidate concepts and a maximum message length of two, all marks are reached in half the time. For a more complex game setup with five candidate concepts and a maximum message length of five, some marks are even reached in a third of the time, when using the prediction task. Using an auxiliary predictive task on a communication learning task has proven auspicious. Sampleefficiency is highly desirable when acquiring language, so the fact that our auxiliary task doubles the learning speed is of high significance. Our experiments do only feature a small partition of the potential of this elegant mechanism, yet. Higher sample-efficiency at no computational cost now allows acquiring more complicated language tasks, that were previously impossible to learn in a reasonable time. We plan to apply our algorithm to much more challenging tasks in the future. We did, for example, only test disentangled input due to computational limitations. The mechanism would be even more useful when applied on entangled input because developing an expressive representation is then of higher importance. For future research, we propose the use of an auxiliary prediction task for the listener to align with the word usage of the speaker, as well. We hope that this simple but powerful mechanism brings the field of language emergence a big step forward. | An auxiliary prediction task can speed up learning in language emergence setups. | 1,214 | scitldr |
Image paragraph captioning is the task of automatically generating multiple sentences for describing images in grain-fined and coherent text. Existing typical deep learning-based models for image captioning consist of an image encoder to extract visual features and a language model decoder, which has shown promising in single high-level sentence generation. However, only the word-level scalar guiding signal is available when the image encoder is optimized to extract visual features. The inconsistency between the parallel extraction of visual features and sequential text supervision limits its success when the length of the generated text is long (more than 50 words). In this paper, we propose a new module, called the Text Embedding Bank (TEB) module, to address the problem for image paragraph captioning. This module uses the paragraph vector model to learn fixed-length feature representations from a variable-length paragraph. We refer to the fixed-length feature as the TEB. This TEB module plays two roles to benefit paragraph captioning performance. First, it acts as a form of global and coherent deep supervision to regularize visual feature extraction in the image encoder. Second, it acts as a distributed memory to provide features of the whole paragraph to the language model, which alleviating the long-term dependency problem. Adding this module to two existing state-of-the-art methods achieves a new state-of-the-art by a large margin on the paragraph captioning Visual Genome dataset. Automatically generating a natural language description for visual content like image or video is an emerging interdisciplinary task. This task involves computer vision, natural language processing and artificial intelligence. Thanks to the advent of large datasets;; Krishna et al. (2017b), many recent works; have shown promising in generating a single high-level scene for images and videos. However, the coarse, scene-level descriptions that these models produce cannot meet real-world applications such as video retrieval, automatic medical report generation;; Li et al. (2018a), blind navigation and automatic video subtitling which capture fine-grained entities and have a coherent and logically detailed description. To tackle this challenge, a relatively new task called paragraph captioning is emerging. Paragraph captioning is the task of generating coherent and logically detailed descriptions by capturing the fine-grained entities of the image or video. A few works;; have pushed the performance to new heights with the main paragraph captioning dataset, the Visual Genome corpus, a dataset introduced by. Compared with the performance of single-sentence caption generating models, the performance paragraph-length caption generating models is lower by a large margin. Paragraph captioning for images and videos is challenging due to the requirement of both fine-grained image understanding and long-term language reasoning. To overcome these challenges, we propose the TEB module, a module that is easy to integrate with existing image captioning models. This module maps variedlength paragraphs to a fixed-length vector which we call TEB. Each unique vector in the TEB has distance meaning and indexed by the order of the word in the vocabulary. The TEB has a distributed memory. This is illustrated in detail in section 3. Existing deep learning based models typically consist of an image encoder to extract visual features in parallel with a RNN language model decoder to generate the sentences word by word sequentially. 
In the training stage, only a tiny partial scalar guiding information from the word level loss is available to optimize the image encoding training. This in an insufficient fine-grained and coherent image visual feature extraction. The TEB module, which holds the whole paragraph in a distributed memory model, can provide global supervision to better regularize the image encoder in the training stage. The RNNs are known to have a long-term dependency problem because of vanishing and exploding gradients which make it unable to meet long-term language reasoning. Since the TEB module has distributed memory and can provide ordering, it is better with long-term language reasoning. We integrated our TEB module with the state-of-the-art methods on the only available paragraph captioning dataset, the Visual Genome corpus, and achieved new state-of-the-art by a large margin. This image to text problem is a classic problem in computer vision and NLP. The first work to use deep neural networks to solve this problem was the Neural Image Caption (NIC) in , which uses a pre-trained CNN as the visual model and a RNN as the language model. The visual model extracts visual features which are fed to the first time step of the RNN. The language model takes visual features from the visual model at the first time step and predicts the first word, before feeding the predicted word into the next time step and so on. At each time step, the difference between the predicted word and the ground truth word is optimized by softmax with cross entropy loss. This work can only predict one short simple sentence for each natural image. The performance of this one sentence caption task is improved in by introducing an attention mechanism which focuses on related regions when generating a word per time step in the RNN model. In order to give a description for every object in an image, proposed a fully convolutional localization network which upgraded the region proposal network from to localize the salient regions. The RNN model then takes the corresponding visual features for each localized region to generate a sentence. However, simply joining all of the generated sentences together doesn't produce a coherent paragraph as there are semantic relationships between sentences, which is a shortcoming of DenseCap. Similarly, dense video captioning, a task which gives each event a description in a video, was first explored in Krishna et al. (2017a) by a variant of the existing proposal module and using 3D features. Later it was further improved by jointly localizing and describing eventsLi et al. (2018b). Recently, the RNN/LSTM language model was replaced by a CNN in; with comparable performance and the potential for parallel computing, which is a drawback of sequential models. In the inference process, however, this CNN model also need to be computed sequentially. Since computation cost is a big issue for video captioning, introduced a new method to find the useful frames which cut redundant information and reduce computation cost. Standard captioning generates single high-level sentence. Dense captioning generates a description for each salient object in an incoherent way. Paragraph captioning, however, overcomes the weaknesses of the previous two tasks by generating fine-grained and coherent natural language descriptions, like a story. 
To meet long-term language reasoning and the requirement of multiple topics in multiple sentences, a hierarchical recurrent neural network architectureLi et al.;;;; is widely used in paragraph captioning. For example, generate multiple sentences for video captioning by capturing strong temporal dependencies. uses a hierarchical recurrent network to build relationships between sentences. Regional features are passed to a sentence RNN to generate topic vectors with a halting distribution to control the ending of new topic generation. The generated topic vectors are then consumed by a word RNN to generate sentences. In this way, this hierarchical RNN and DenseCap offer two ways of generating new topics, which is essential for multiple sentence generation. The IU Chest X-ray dataset is used for automatic report generation on this unstructured reportJing et al. by using co-attention and the hierarchical LSTM. The Diversity model improves sentence diversity by introducing a repetitive penalty in the sequence-level training. However, all of these methods suffer from the fact that only a tiny partial scalar from the word level loss can be used as guiding information to optimize the image encoding in training. Our TEB module can overcome this and provides an alternative for the hierarchical recurrent neural network architecture. With our TEB module, only one level recurrent neural network is enough to generate multiple sentences with multiple find-grained topics. GANs have proved to improve real text generation in. is proposed to deal with the sequential and discrete property of text for text generation. solves the sparse signal from generator problem by leaking feature from the generator to the discriminator for long sentence generation. introduced a way to fill in the blank with GAN. use long-term feature banks for detailed video understanding. The proposed TEB module improves paragraph captioning by describing the rich content of a given image. Figure 1 shows an example of how the TEB module can be integrated with an existing typical image captioning pipeline. The paragraph vector is based on word vectors. A word vector is the concept of using a distributed vector representation of words. The basic idea is to predict a word given the other words in a context. The framework is shown in Figure 2. In this framework, each word is mapped to a unique vector which is a column of a matrix W. The column is indexed by the order of the word in the vocabulary. The features to predict the next word are the sum or concatenation of the vectors. To express this in a mathematical equation, let w 1, w 2, w 3,..., w T represent the vectors of a sequence of training words. The objective function of the framework is to maximize the average log probability Typically, a multi-class classifier such as softmax is used for the prediction task. So, we have where y i is the un-normalized log-probability for each output word i, which is computed as This framework is implemented in a neural network and trained using stochastic gradient descent through back-propagation. This type of model is the well known neural language model Bengio et al.. Compared to existing image captioning models, which only using recurrent neural networks, after training converges, this framework can map words with similar meaning to a similar position in the vector space. For example, "wind" and "beautiful" are far away from each other in the vector space, while "beautiful" and "pretty" are closer. 
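The two displayed equations referenced in this passage (the average log probability to maximize, and the un-normalized log-probabilities y_i) appear to have been dropped during extraction. The standard formulation from the paragraph-vector framework they describe is:

```latex
\frac{1}{T}\sum_{t=k}^{T-k} \log p(w_t \mid w_{t-k},\ldots,w_{t+k}), \qquad
p(w_t \mid w_{t-k},\ldots,w_{t+k}) = \frac{e^{y_{w_t}}}{\sum_i e^{y_i}}, \qquad
y = b + U\, h(w_{t-k},\ldots,w_{t+k};\, W)
```

where h is the sum or concatenation of the word vectors (columns of W) and, in the paragraph-vector model, additionally of the paragraph vector (a column of D).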
Additionally, the distance between each unique word vector also carries meaning. This means that it can be used for analogy questions answering in a simple vector algebra manipulation: "waiter" -"man" + "women" = "waitress". This makes it easy to learn a linear matrix, such as a fully connected layer, to translate between visual features and these word vectors. Inspired by the word vector framework which can capture the semantics as a of a prediction task, The paragraph vector also contributes the prediction of the next word. In this paragraph vector framework (See Figure 3), similarly to the word vector framework, each word is still mapped to a unique vector which is a column of a matrix W, while each paragraph is mapped to a unique vector which is a column of a matrix D. Then both the word vector and paragraph vector are fused (either sum or concatenated) as features to predict the next word. We use concatenation in our implementation. The paragraph vector can be treated as a super word (or the topic of the paragraph) which acts as memory of the missing information from the current context. Hence, this framework is known as a distributed memory model. This property can compensate the recurrent neural network for its lack of generating logical connections between sentences or paragraphs. Figure 1: Integration of the paragraph vector framework as a TEB module to an existing deep learning based image captioning model. There are three interconnected components divided into three dashed rectangular boxes. In the green box on the top left, the image encoder extracts visual features through a CNN model. In the yellow box on the bottom, a RNN based language model decoder is used to generate paragraphs. Existing deep learning based models only contain these two components. The red box on the top right box is the TEB module: In the training stage, for an image, paragraph pair, the varied-length paragraph is mapped to a fixed-length vector which is called TEB through the paragraph vector framework. The visual features from the image encoder are converted to the predicted TEB (called TEB') through several fully connected layers. The TEB' is supervised by the TEB through an L1 loss, which acts as global deep supervision to regularize the visual feature extraction for the image encoder. The visual features and TEB' are concatenated and feed into the RNN as input. The generated paragraph is supervised by the ground truth paragraph through a word-level loss. In the inference stage, the TEB is not available and the TEB' acts as the TEB to provide the features of the whole paragraph to alleviate the long-term dependency problem for the language model. The integration of the paragraph vector as a TEB module for image paragraph captioning is illustrated in Figure 1.. The hyperparameters are as follows: The vector size (TEB size) is 512, the sliding window size is 50, the sampling threshold is 1e − 5, the negative size is 5. The paragraph vector model is trained for 1000 epochs before performing the inference to generate the TEB. Regardless of the dimension size of the visual features from the image encoder, the visual features are converted to the same dimension of the TEB by several fully connected layers. In the concatenation of the TEB' and visual features, a weight of 0.1 is applied to the TEB'. We integrate our TEB module with the Diversity model as its backbone. Self-critical sequence training (SCST) and repetitive training are also used. We also integrate the TEB module with a transformer model. 
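A minimal sketch of the TEB construction and integration with the stated hyperparameters is given below. We use gensim's Doc2Vec as a stand-in for the paragraph-vector training (the paper does not say which implementation was used), and the toy paragraphs, layer sizes, and helper names are purely illustrative.

```python
import torch
import torch.nn as nn
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# 1) learn one fixed-length TEB per ground-truth training paragraph
paragraphs = ["a large elephant is standing in the grass . the elephant is a baby elephant .",
              "a white toilet is in the bathroom . the toilet lid is up ."]
docs = [TaggedDocument(p.split(), [i]) for i, p in enumerate(paragraphs)]
pv = Doc2Vec(docs, vector_size=512, window=50, sample=1e-5, negative=5, epochs=1000)
teb = torch.tensor(pv.dv.vectors)                    # (N, 512) ground-truth TEBs

# 2) map visual features to a predicted TEB' and supervise it with an L1 loss
class TEBHead(nn.Module):
    def __init__(self, vis_dim, teb_dim=512):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(vis_dim, teb_dim), nn.ReLU(),
                                nn.Linear(teb_dim, teb_dim))
    def forward(self, vis):
        return self.fc(vis)

def teb_l1(teb_pred, teb_target):
    return torch.abs(teb_pred - teb_target).mean()

# 3) language-model input: visual features concatenated with the down-weighted TEB'
def lm_input(vis, teb_pred, w=0.1):
    return torch.cat([vis, w * teb_pred], dim=-1)
```

At inference time, where no ground-truth paragraph exists, only steps 2 and 3 are used and the predicted TEB' plays the role of the TEB, as described above.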
The transformer model is adapted from the Bottom-Up and Top-Down model with the following modification: The LSTM-based language model is replaced by the transformer modelVaswani et al.. We used both cross-entropy and SCST training, without the repetition penalty, and beam search instead o greedy search. 5 Table 1 shows the quantitative . We have three models. The "Diversity + TEB" model is the Diversity model Melas-Kyriazi et al. with SCST trainingRennie et al. (2017, repetition penalty and TEB module. The "Transformer" model is Replacing the LSTM model with in the Bottom-UP and Top-Down modelAnderson et al.. The "Transformer + TEB" is the "Transformer" model with TEB module. The TEB module improve both baseline model by a large margin and our model "Diversity + TEB' achieve state of the art on the visual genome . Figure 2 shows the qualitative between A man is standing in a white basket. He is wearing a black shirt and a black hat. The man is holding a hot dog in his hand. There is a man in a black shirt standing behind the man. There is a man. The man is wearing a yellow shirt. The man is standing at a park. There are more people in the park. There are people sitting under the threes. There are people walking in the paths. The man is holding a sandwich. The sandwich is a hot dog. The sandwich has a sausage. The sandwich has onions. A large elephant is standing in the grass. The elephant is a baby elephant. The elephant has a long trunk. The baby elephant is walking. The grass is green. The elephants are standing on the grass. The water is calm. The elephants are white. The tusks are white. They are a few trees in the water. A large elephant is standing in the water. The elephant is walking in the water. There is a large body of water behind the elephant. There are a small rock behind the elephant. The elephant stands in the grass. The elephant is small. The elephant has tusks. The tusks are little. The elephant is grey. The elephant is standing by the water. The water is like a river. The grass is mostly yellow. There is a hill on the other side of the water. The water is still. there is wood in the water. The grass is short. There are trees on the other side of the river. A white toilet is in the bathroom. The toilet is white. The lid is white. There is a white toilet in the toilet. The toilet lid is up. The floor is made of white. The tiles are white. There is a white wall behind the toilet. A white toilet is sitting on the ground. There is a white toilet in the toilet. There is a toilet in front of the toilet. The toilet lid is up. The toilet bowl is cleaning. The toilet is a very light beige color. There's a white bar between the toilet lid and the toilet seat. The toilet is encased in a cubby space. The water in the toilet is low. The floor around the toilet is made of tiles. There are wires on the bottom left side of the toilet bowl. In this paper, we propose the Text Embedding Bank (TEB) module for visual paragraph captioning, a task which requires capturing fine-grained entities in the image to generate a detailed and coherent paragraph, like a story. Our TEB module provides global and parallel deep supervision and distributed memory for find-grained image understanding and long-term language reasoning. Integrating the TEB module to existing state-of-the-art methods achieves new state-of-the-art by a large margin. | TEB Module for IPC | 1,215 | scitldr |
Learning Mahalanobis metric spaces is an important problem that has found numerous applications. Several algorithms have been designed for this problem, including Information Theoretic Metric Learning (ITML) [Davis et al. 2007] and Large Margin Nearest Neighbor (LMNN) classification [Weinberger and Saul 2009]. We consider a formulation of Mahalanobis metric learning as an optimization problem, where the objective is to minimize the number of violated similarity/dissimilarity constraints. We show that for any fixed ambient dimension, there exists a fully polynomial time approximation scheme (FPTAS) with nearly-linear running time. This is obtained using tools from the theory of linear programming in low dimensions. We also discuss improvements of the algorithm in practice, and present experimental results on synthetic and real-world data sets. Our algorithm is fully parallelizable and performs favorably in the presence of adversarial noise. Learning metric spaces is a fundamental computational primitive that has found numerous applications and has received significant attention in the literature. We refer the reader to the relevant surveys for a detailed exposition and discussion of previous work. At the high level, the input to a metric learning problem consists of some universe of objects X, together with some similarity information on subsets of these objects. Here, we focus on pairwise similarity and dissimilarity constraints. Specifically, we are given sets S, D of unordered pairs of objects from X that are labeled as similar and dissimilar respectively. We are also given some u, ℓ > 0, and we seek to find a mapping f: X → Y into some target metric space (Y, ρ), such that for all {x, y} ∈ S, ρ(f(x), f(y)) ≤ u, and for all {x, y} ∈ D, ρ(f(x), f(y)) ≥ ℓ. In the case of Mahalanobis metric learning, we have X ⊂ R^d, with |X| = n, for some d ∈ N, and the mapping f: R^d → R^d is linear. Specifically, we seek to find a matrix G ∈ R^{d×d} such that for all {p, q} ∈ S we have ‖G(p − q)‖₂ ≤ u (constraint 1), and for all {p, q} ∈ D we have ‖G(p − q)‖₂ ≥ ℓ (constraint 2). 1.1 OUR CONTRIBUTION In general, there might not exist any G that satisfies all constraints of type 1 and 2. We are thus interested in finding a solution that minimizes the fraction of violated constraints, which corresponds to maximizing the accuracy of the mapping. We develop a (1 + ε)-approximation algorithm for the optimization problem of computing a Mahalanobis metric space of maximum accuracy, which runs in near-linear time for any fixed ambient dimension d ∈ N. This algorithm is obtained using tools from geometric approximation algorithms and the theory of linear programming in small dimension. The following theorem summarizes our result. Theorem 1.1. For any d ∈ N, ε > 0, there exists a randomized algorithm for learning d-dimensional Mahalanobis metric spaces which, given an instance that admits a mapping with accuracy r*, computes a mapping with accuracy at least r* − ε, in time d^{O(1)} n (log n/ε)^{O(d)}, with high probability. The above algorithm can be extended to handle various forms of regularization. We also propose several modifications of our algorithm that lead to significant performance improvements in practice. The final algorithm is evaluated experimentally on both synthetic and real-world data sets, and is compared against the currently best-known algorithms for the problem. Several algorithms for learning Mahalanobis metric spaces have been proposed.
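For concreteness, the accuracy objective being maximized can be evaluated as in the following small NumPy sketch; the function name and the toy example are our own, and the similarity/dissimilarity constraints follow the form stated above.

```python
import numpy as np

def accuracy(G, S, D, u, l):
    """Fraction of constraints satisfied by the linear map G:
    ||G(p - q)||_2 <= u for similar pairs, ||G(p - q)||_2 >= l for dissimilar pairs."""
    sat = sum(np.linalg.norm(G @ (p - q)) <= u for p, q in S)
    sat += sum(np.linalg.norm(G @ (p - q)) >= l for p, q in D)
    return sat / max(len(S) + len(D), 1)

# toy usage
G = np.eye(2)
S = [(np.array([0.0, 0.0]), np.array([0.1, 0.0]))]
D = [(np.array([0.0, 0.0]), np.array([3.0, 0.0]))]
print(accuracy(G, S, D, u=1.0, l=2.0))   # 1.0
```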
Notable examples include the SDP based algorithm of , the algorithm of Globerson and Roweis for the fully supervised setting , Information Theoretic Metric Learning (ITML) by , which casts the problem as a particular optimization minimizing LogDet divergence, as well as Large Margin Nearest Neighbor (LMNN) by , which attempts to learn a metric geared towards optimizing k-NN classification. We refer the reader to the surveys; for a detailed discussion of previous work. Our algorithm differs from previous approaches in that it seeks to directly minimize the number of violated pairwise distance constraints, which is a highly non-convex objective, without resorting to a convex relaxation of the corresponding optimization problem. The rest of the paper is organized as follows. Section 2 describes the main algorithm and the proof of Theorem 1.1. Section 3 discusses practical improvements used in the implementation of the algorithm. Section 4 presents the experimental evaluation. In this Section we present an approximation scheme for Mahalanobis metric learning in d-dimensional Euclidean space, with nearly-linear running time. We begin by recalling some prior on the class of LP-type problems, which generalizes linear programming. We then show that linear metric learning can be cast as an LP-type problem. Let us recall the definition of an LP-type problem. Let H be a set of constraints, and let w: 2 H Ñ R Y t´8,`8u, such that for any G Ă H, wpGq is the value of the optimal solution of the instance defined by G. We say that pH, wq defines an LP-type problem if the following axioms hold: (A1) Monotonicity. For any F Ď G Ď H, we have wpF q ď wpGq. (A2) Locality. For any F Ď G Ď H, with´8 ă wpF q " wpGq, and any h P H, if wpGq ă wpG Y thuq, then wpF q ă wpF Y thuq. More generally, we say that pH, wq defines an LP-type problem on some H 1 Ď H, when conditions (A1) and (A2) hold for all F Ď G Ď H 1. A subset B Ď H is called a basis if wpBq ą´8 and wpB 1 q ă wpBq for any proper subset B 1 Ĺ B. A basic operation is defined to be one of the following: (B0) Initial basis computation. Given some G Ď H, compute any basis for G. (B1) Violation test. For some h P H and some basis B Ď H, test whether wpB Y thuq ą wpBq (in other words, whether B violates h). (B2) Basis computation. For some h P H and some basis B Ď H, compute a basis of B Y thu. We now show that learning Mahalanobis metric spaces can be expressed as an LP-type problem. We first note that we can rewrite and as and where A " G T G is positive semidefinite. We define H " t0, 1uˆ`R d 2˘, where for each p0, tp, quq P H, we have a constraint of type, and for every p1, tp, quq P H, we have a constraint of type. Therefore, for any set of constraints F Ď H, we may associate the set of feasible solutions for F with the set A F of all positive semidefinite matrices A P R nˆn, satisfying and for all constraints in F. Let w: 2 H Ñ R, such that for all F P H, we have where r P R d is a vector chosen uniformly at random from the unit sphere from some rotationallyinvariant probability measure. Such a vector can be chosen, for example, by first choosing some r 1 P R d, where each coordinate is sampled from the normal distribution N p0, 1q, and setting r " r 1 {}r 1 } 2. Lemma 2.1. When w is chosen as above, the pair pH, wq defines an LP-type problem of combinatorial dimension Opd 2 q, with probability 1. Moreover, for any n ą 0, if each r i is chosen using Ωplog nq bits of precision, then for each F Ď H, with n " |F |, the assertion holds with high probability. 
Proof. Since adding constraints to a feasible instance can only make it infeasible, it follows that w satisfies the monotonicity axiom (A1). We next argue that the locality axion (A2) also holds, with high probability. Let F Ď G Ď H, with 8 ă wpF q " wpGq, and let h P H, with wpGq ă wpG Y thuq. Let A F P A F and A G P A G be some (not necessarily unique) infimizers of wpAq, when A ranges in A F and A G respectively. The set A F, viewed as a convex subset of R d 2, is the intersection of the SDP cone with n half-spaces, and thus A F has at most n facets. There are at least two distinct infimizers for wpA G q, when A G P A G, only when the randomly chosen vector r is orthogonal to a certain direction, which occurs with probability 0. When each entry of r is chosen with c log n bits of precision, the probability that r is orthogonal to any single hyperplane is at most 2´c log n " n´c; the assertion follows by a union bound over n facets. This establishes that axiom (A2) holds with high probability. It remains to bound the combinatorial dimension, κ. Let F Ď H be a set of constraints. For each A P A F, define the ellipsoid For any A, A 1 P A F, with E A " E A 1, and Therefore in order to specify a linear transformation G, up to an isometry, it suffices to specify the ellipsoid E A. Each tp, qu P S corresponds to the constraint that the point pp´qq{u must lie in E A . Similarly each tp, qu P D corresponds to the constraint that the point pp´qq{ must lie either on the boundary or the exterior of E A . Any ellipsoid in R d is uniquely determined by specifying at most pd`3qd{2 " Opd 2 q distinct points on its boundary (see ;). Therefore, each optimal solution can be uniquely specified as the intersection of at most Opd 2 q constraints, and thus the combinatorial dimension is Opd 2 q. The basis computation step (B2) can be performed starting with the set of constraints B Y thu, and iteratively remove every constraint whose removal does not decrease the optimum cost, until we arrive at a minimal set, which is a basis. In total, we need to solve at most d SDPs, each of size Opd 2 q, which can be done in total time d Op1q. Finally, by the choice of w, any set containing a single constraint in S is a valid initial basis. Using the above formulation of Mahalanobis metric learning as an LP-type problem, we can obtain our approximation scheme. Our algorithm uses as a subroutine an exact algorithm for the problem (that is, for the special case where we seek to find a mapping that satisfies all constraints). We first present the exact algorithm and then show how it can be used to derive the approximation scheme. An exact algorithm. obtained a simple randomized linear-time algorithm for the minimum enclosing ball and minimum enclosing ellipsoid problems. This algorithm naturally extends to general LP-type problems (we refer the reader to ; for further details). With the interpretation of Mahalanobis metric learning as an LP-type problem given above, we thus obtain a linear time algorithm for in R d, for any constant d P N. The ing algorithm on a set of constraints F Ď H is implemented by the procedure Exact-LPTMLpF; Hq, which is presented in Algorithm 1. The procedure LPTMLpF; Bq takes as input sets of constraints F, B Ď H. It outputs a solution A P R dˆd to the problem induced by the set of constraints F Y B, such that all constraints in B are tight (that is, they hold with equality); if no such solution solution exists, then it returns nil. The procedure Basic-LPTMLpBq computes LPTMLpH; Bq. 
The analysis of implies that when Basic-LPTMLpBq is called, the cardinality of B is at most the combinatorial dimension, which by Lemma 2.1 is Opd 2 q. Thus the procedure Basic-LPTML can be implemented using one initial basis computation (B0) and Opd 2 q basis computations (B2), which by Lemma 2.2 takes total time d Op1q. Algorithm 1 An exact algorithm for Mahalanobis metric learning. An p1`εq-approximation algorithm. It is known that the above exact linear-time algorithm leads to an nearly-linear-time approximation scheme for LP-type problems. This is summarized in the following. We refer the reader to for a more detailed treatment. Lemma 2.3 (, Ch. 15). Let A be some LP-type problem of combinatorial dimension κ ą 0, defined by some pair pH, wq, and let ε ą 0. There exists a randomized algorithm which given some instance F Ď H, with |F | " n, outputs some basis B Ď F, that violates at most p1`εqk constraints in F, such that wpBq ď wpB 1 q, for any basis B 1 violating at most k constraints in F, in, log κ`2 n kε 2κ`2 )¯p t 1`t2 q¯, where t 0 is the time needed to compute an arbitrary initial basis of A, and t 1, t 2, and t 3 are upper bounds on the time needed to perform the basic operations (B0), (B1) and (B2) respectively. The algorithm succeeds with high probability. For the special case of Mahalanobis metric learning, the corresponding algorithm is given in Algorithm 2. The approximation guarantee for this algorithm is summarized in 1.1. We can now give the proof of our main . Proof of Theorem 1.1. Follows immediately by Lemmas 2.2 and 2.3. Algorithm 2 An approximation algorithm for Mahalanobis metric learning. procedure LPTML(F) for i " 0 to log 1`ε n do p Ð p1`εq´i for j " 1 to log Opd 2 q n do subsample F j Ď F, where each element is chosen independently with probability p A i,j Ð Exact-LPTMLpF j q end for end for return a solution out of tA i,j u i,j, violating the minimum number of constraints in F end procedure Regularization. We now argue that the LP-type algorithm described above can be extended to handle certain types of regularization on the matrix A. In methods based on convex optimization, introducing regularizers that are convex functions can often be done easily. In our case, we cannot directly introduce a regularizing term in the objective function that is implicit in Algorithm 2. More specifically, let costpAq denote the total number of constraints of type and that A violates. Algorithm 2 approximately minimizes the objective function costpAq. A natural regularized version of Mahalanobis metric learning is to instead minimize the objective function cost 1 pAq:" costpAq`η¨regpAq, for some η ą 0, and regularizer regpAq. One typical choice is regpAq " trpACq, for some matrix C P R dˆd; the case C " I corresponds to the trace norm (see). We can extend the Algorithm 2 to handle any regularizer that can be expressed as a linear function on the entries of A, such as trpAq. The following summarizes the . Theorem 2.4. Let regpAq be a linear function on the entries of A, with polynomially bounded coefficients. For any d P N, ε ą 0, there exists a randomized algorithm for learning d-dimensional Mahalanobis metric spaces, which given an instance that admits a solution A 0 with cost 1 pA 0 q " c˚, computes a solution A with cost 1 pAq ď p1`εqc˚, in time d Op1q nplog n{εq Opdq, with high probability. Proof. 
If η ă ε t, for sufficiently large constant t ą 0, since the coefficients in regpAq are polynomially bounded, it follows that the largest possible value of η¨regpAq is Opεq, and can thus be omitted without affecting the . Similarly, if η ą p1{εqn t 1, for sufficiently large constant t 1 ą 0, since there are at most`n 2˘c onstraints, it follows that the term costpAq can be omitted form the objective. Therefore, we may assume w.l.o.g. that regpA 0 q P rε Op1q, p1{εqn Op1q s. We can guess some i " Oplog n`logp1{εqq, such that regpA 0 q P pp1`εq i´1, p1`εq i s. We modify the SDP used in the proof of Lemma 2.2 by introducing the constraint regpAq ď p1`εq i . Guessing the correct value of i requires Oplog n`logp1{εqq executions of Algorithm 2, which implies the running time bound. We now discuss some modifications of the algorithm described in the previous section that significantly improve its performance in practical scenarios, and have been integrated in our implementation. Move-to-front and pivoting heuristics. We use heuristics that have been previously used in algorithms for linear programming ; , minimum enclosing ball in R 3 , minimum enclosing ball and ellipsoid is R d, for any fixed d P N , as well as in fast implementations of minimum enclosing ball algorithms Gärtner. The move-to-front heuristic keeps an ordered list of constraints which gets reorganized as the algorithm runs; when the algorithm finds a violation, it moves the violating constraint to the beginning of the list of the current sub-problem. The pivoting heuristic further improves performance by choosing to add to the basis the constraint that is "violated the most". For instance, for similarity constraints, we pick the one that is mapped to the largest distance greater than u; for dissimilarity constraints, we pick the one that is mapped to the smallest distance less than. Approximate counting. The main loop of Algorithm 2 involves counting the number of violated constraints in each iteration. In problems involving a large number of constraints, we use approximate counting by only counting the number of violations within a sample of Oplog 1{εq constraints. We denote by LPTML t for the version of the algorithm that performs a total of t iterations of the inner loop. Early termination. A bottleneck of Algorithm 2 stems from the fact that the inner loop needs to be executed for log Opd 2 q n iterations. In practice, we have observed that a significantly smaller number of iterations is needed to achieve high accuracy. Parallelization. Algorithm 2 consists of several executions of the algorithm Exact-LPTML on independently sampled sub-problems. Therefore, Algorithm 2 can trivially be parallelized by distributing a different set of sub-problems to each machine, and returning the best solution found overall. We have implemented Algorithm 2, incorporating the practical improvements described in Section 3, and performed experiments on synthetic and real-world data sets. Our LPTML implementation and documentation can be found at the supplementary material 1 . We now describe the experimental setting and discuss the main findings. Classification task. Each data set used in the experiments consists of a set of labeled points in R d . The label of each point indicates its class, and there is a constant number of classes. The set of similarity constraints S (respt. dissimilarity constraints D) is formed by uniformly sampling pairs of points in the same class (resp. from different classes). 
We use various algorithms to learn a Mahalanobis metric for a labeled input point set in R^d, given these constraints. The values u and ℓ are chosen as the 90th and 10th percentiles of all pairwise distances. We used 2-fold cross-validation: in the training phase we learn a Mahalanobis metric, and in the testing phase we use k-NN classification, with k = 4, to evaluate the performance. Data sets. We have tested our algorithm on the following synthetic and real-world data sets: 1. Real-world: We have tested the performance of our implementation on the Iris, Wine, Ionosphere and Soybean data sets from the UCI Machine Learning Repository. 2. Synthetic: Next, we consider a synthetic data set that is constructed by first sampling a set of 100 points from a mixture of two Gaussians in R^2, with identity covariance matrices, and with means (−3, 0) and (3, 0) respectively; we then apply a linear transformation that stretches the y axis by a factor of 40. This linear transformation reduces the accuracy of k-NN on the underlying Euclidean metric with k = 4 from 1 to 0.68. 3. Synthetic + Adversarial Noise: We modify the above synthetic data set by introducing a small fraction of points in an adversarial manner, before applying the linear transformation. Figure 3b depicts the noise added as five points labeled as one of the classes, and sampled from a Gaussian with identity covariance matrix and mean (−100, 0) (Figure 3a). Algorithms. We compare the performance of our algorithm against ITML and LMNN. We used the implementations provided by the authors of these works, with minor modifications. Accuracy. Algorithm 2 minimizes the number of violated pairwise distance constraints. It is interesting to examine the effect of this objective function on the accuracy of k-NN classification. Comparison to ITML and LMNN. We compared the accuracy obtained by LPTML_t, for t = 2000 iterations, against ITML and LMNN. Table 1 summarizes the findings on the real-world data sets and the synthetic data set without adversarial noise. We observe that LPTML achieves accuracy that is comparable to ITML and LMNN. We also observe that LPTML outperforms ITML and LMNN on the Synthetic + Adversarial Noise data set. This is due to the fact that the introduction of adversarial noise causes the relaxations used in ITML and LMNN to be biased towards contracting the x-axis. In contrast, the noise does not "fool" LPTML because it only changes the optimal accuracy by a small amount. The results are summarized in Figure 2. | Fully parallelizable and adversarial-noise resistant metric learning algorithm with theoretical guarantees. | 1,216 | scitldr
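To make the paper's objective and the outer loop of the approximation scheme concrete, here is a minimal sketch in our own naming (`exact_solver` stands in for the Exact-LPTML routine of Algorithm 1 and is not implemented here): it counts violated constraints for a candidate PSD matrix A = G^T G and runs the subsample-and-solve loop of Algorithm 2. The accuracy of a returned mapping, in the sense of Section 1.1, is then 1 − best_bad/n.

```python
import numpy as np

def violates(A, c, u, l):
    """c = (kind, p, q): kind 0 is a similarity constraint, kind 1 a dissimilarity one.
    A is the PSD matrix of the learned Mahalanobis metric (A = G^T G)."""
    kind, p, q = c
    d2 = (p - q) @ A @ (p - q)              # squared Mahalanobis distance
    return d2 > u ** 2 if kind == 0 else d2 < l ** 2

def lptml(constraints, u, l, eps, exact_solver, n_inner, rng=None):
    """Subsample-and-solve outer loop: solve exactly on random subsamples and keep
    the solution that violates the fewest constraints on the full instance."""
    rng = rng or np.random.default_rng()
    n = len(constraints)
    best, best_bad = None, n + 1
    i = 0
    while (1 + eps) ** i <= n:              # sampling rates p = (1 + eps)^(-i)
        p = (1 + eps) ** (-i)
        for _ in range(n_inner):
            sample = [c for c in constraints if rng.random() < p]
            A = exact_solver(sample)        # may return None on infeasible subsamples
            if A is None:
                continue
            bad = sum(violates(A, c, u, l) for c in constraints)
            if bad < best_bad:
                best, best_bad = A, bad
        i += 1
    return best
```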
Standard image captioning tasks such as COCO and Flickr30k are factual, neutral in tone and (to a human) state the obvious (e.g., “a man playing a guitar”). While such tasks are useful to verify that a machine understands the content of an image, they are not engaging to humans as captions. With this in mind we define a new task, Personality-Captions, where the goal is to be as engaging to humans as possible by incorporating controllable style and personality traits. We collect and release a large dataset of 201,858 of such captions conditioned over 215 possible traits. We build models that combine existing work from (i) sentence representations (Mazaré et al., 2018) with Transformers trained on 1.7 billion dialogue examples; and (ii) image representations with ResNets trained on 3.5 billion social media images. We obtain state-of-the-art performance on Flickr30k and COCO, and strong performance on our new task. Finally, online evaluations validate that our task and models are engaging to humans, with our best model close to human performance. If we want machines to communicate with humans, they must be able to capture our interest, which means spanning both the ability to understand and the ability to be engaging, in particular to display emotion and personality as well as conversational function BID17 BID18 BID41 BID19.Communication grounded in images is naturally engaging to humans BID15, and yet the majority of studies in the machine learning community have so far focused on function only: standard image captioning BID36 requires the machine to generate a sentence which factually describes the elements of the scene in a neutral tone. Similarly, visual question answering BID2 and visual dialogue BID6 require the machine to answer factual questions about the contents of the image, either in single turn or dialogue form. They assess whether the machine can perform basic perception over the image which humans take for granted. Hence, they are useful for developing models that understand content, but are not useful as an end application unless the human cannot see the image, e.g. due to visual impairment BID13.Standard image captioning tasks simply state the obvious, and are not considered engaging captions by humans. For example, in the COCO BID5 and Flickr30k BID52 tasks, some examples of captions include "a large bus sitting next to a very tall building" and "a butcher cutting an animal to sell", which describe the contents of those images in a personality-free, factual manner. However, humans consider engaging and effective captions ones that "avoid stating the obvious", as shown by advice to human captioners outside of machine learning.1 For example, "If the bride and groom are smiling at each other, don't write that they are smiling at each other. The photo already visually shows what the subject is doing. Rephrase the caption to reflect the story behind the image". Moreover, it is considered that "conversational language works best. Write the caption as though you are talking to a family member or friend".2 These instructions for human captioners to engage human readers seem to be in direct opposition to standard captioning datasets. In this work we focus on image captioning that is engaging for humans by incorporating personality. As no large dataset exists that covers the range of human personalities, we build and release a new dataset, PERSONALITY-CAPTIONS, with 201,858 captions, each conditioned on one of 215 Standard captioning output: A plate with a sandwich and salad on it. 
Our model with different personality traits: Sweet That is a lovely sandwich. This sandwich looks so delicious! My goodness! Anxious I'm afraid this might make me sick if I eat it. Sympathetic I feel so bad for that carrot, about to be consumed. Arrogant I make better food than this Optimistic It will taste positively wonderful! Money-minded I would totally pay $100 for this plate. Figure 1: Comparison of a standard captioning model compared to our TransResNet model's predictions on the same image conditioned on various personality traits. Our model is trained on the new PERSONALITY-CAPTIONS dataset which covers 215 different personality traits. The standard captioning system used for comparison is the best COCO UPDOWN model described in Section 4.2. different possible personality traits. We show that such captions are far more engaging to humans than traditional ones. We then develop model architectures that can simultaneously understand image content and provide engaging captions for humans. To build strong models, we consider both retrieval and generative variants, and leverage state-of-the-art modules from both the vision and language domains. For image representations, we employ the work of BID28 that uses a ResNeXt architecture trained on 3.5 billion social media images which we apply to both. For text, we use a Transformer sentence representation following BID32 ) trained on 1.7 billion dialogue examples. Our generative model gives a new state-of-the-art on caption generation on COCO, and our retrieval architecture, TransResNet, yields the highest known hits@1 score on the Flickr30k dataset. To make the models more engaging to humans, we then adapt those same architectures to the PERSONALITY-CAPTIONS task by conditioning the input image on the given personality traits, giving strong performance on our new task. In particular, when compared to human captions, annotators preferred our retrieval model's captions over human ones 49.5% of the time, where the difference is not statistically significant. A large body of work has focused on developing image captioning datasets and models that work on them. In this paper we also perform experiments on the COCO BID5 and Flickr30k BID52 datasets, comparing to a range of models, including both generative models such as in BID45 BID49 BID1 and retrieval based such as in BID12 BID10 BID34. These setups measure the ability of models to understand the content of an image, but do not address more natural human communication. A number of works have tried to induce more engaging captions for human readers. One area of study is to make the caption personalized to the reader, e.g. by using user level features such as location and age BID7 or knowledge of the reader's active vocabulary BID38. Our work does not address this issue. Another research direction is to attempt to produce amusing captions either through wordplay (puns) BID4 or training on data from humour websites BID50. Our work focuses on a general set of personality traits, not on humour. Finally, closer to our work are approaches that attempt to model the style of the caption. Some methods have tried to learn style in an unsupervised fashion, as a supervised dataset like we have built in this work was not available. As a , evaluation was more challenging in those works, see e.g. BID30. Others such as BID51 have used small datasets like SentiCap BID31 with ∼800 images to inject sentiment into captions. 
BID11 collect a somewhat bigger dataset with 10,000 examples, FlickrStyle10K, but only covers two types of style (romantic and humorous). In contrast, our models are trained on the PERSONALITY-CAPTIONS dataset that has 215 traits and ∼200,000 images. Our work can also be linked to the more general area of human communication, separate from just factual captioning, in particular image grounded conversations between humans (Mostafazadeh . In those tasks, simple word overlap based automatic metrics are shown to perform weakly BID24 due to the intrinsically more diverse outputs in the tasks. As in those domains, we thus also perform human evaluations in this work to measure the engagingness of our setup and models. In terms of modeling, image captioning performance is clearly boosted with any advancements in image or text encoders, particularly the former. In this work we make use of the latest advancements in image encoding by using the work of BID28 which provides state-of-the-art performance on Imagenet image classification, but has so far not been applied to captioning. For text encoding we use the latest advances in attention-based representations using Transformers BID42 ; in particular, their use in retrieval models for dialogue by large-scale pretraining is adapted here for our captioning tasks. The PERSONALITY-CAPTIONS dataset is a large collection of (image, personality trait, caption) triples that we collected using crowd-workers, and will be made publicly available upon acceptance. We considered 215 possible personality traits which were constructed by selecting a subset from a curated list of 638 traits 3 that we deemed suitable for our captioning task. The traits are categorized into three classes: positive (e.g., sweet, happy, eloquent, humble, perceptive, witty), neutral (e.g., old-fashioned, skeptical, solemn, questioning) and negative (e.g., anxious, childish, critical, fickle, frivolous). Examples of traits that we did not use are allocentric, insouciant, flexible, earthy and invisible, due to the difficulty of their interpretation with respect to captioning an image. We use a randomly selected set of the images from the YFFC100M Dataset 4 to build our training, validation and test sets, selecting for each chosen image a random personality trait from our list. In each annotation round, an annotator is shown an image along with a trait. The annotators are then asked to write an engaging caption for the image in the context of the personality trait. It was emphasized that the personality trait describes a trait of the author of the caption, not properties of the content of the image. See Section D in the appendix for the exact instructions given to annotators. 4 ) trained on 3.5 billion Instagram pictures following the procedure described by BID28, which we refer to in the rest of the paper as ResNeXt-IG-3.5B. The authors provided the weights of their trained model to us. Both networks embed images in a 2048-dimensional vector which is the input for most of our models. In some of the caption generation models that make use of attention, we keep the spatial extent of the features by adapting activation before the last average pooling layer, and thus extract features with 7 × 7 × 2048 dimensions. We re-implemented three widely used previous/current state-of-the-art methods BID45 BID49 BID1 for image captioning as representatives of caption generation models. We refer them as SHOWTELL, SHOWATTTELL and UPDOWN respectively. 
We extract the image representation r I using the aforementioned image encoders. The SHOWTELL model uses image features with 2048 dimensions and the other models use image features with 7 × 7 × 2048 dimensions. In the case where we augment our models with personality traits, we learn an embedding for each trait, which is concatenated with each input of the decoder. Caption Decoders The SHOWTELL model first applies a linear projection to reduce image features into a feature vector with 512 dimensions. Similar to BID45, this embedding is the input for a LSTM model that generates the output sequence. In SHOWATTTELL, while the overall architecture is similar to BID49, we adopt the modification suggested by BID39 and input the attention-derived image features to the cell node of the LSTM. Finally, we use the UPDOWN model exactly as described in BID1. We perform a two-stage training strategy to train such caption generation models as proposed by BID39. In the first stage, we train the model to optimize the standard cross-entropy loss. In the second stage, we perform policy gradient with REINFORCE to optimize the non-differentiable reward function (CIDEr score in our case). During inference, we apply beam search (beam size=2) to decode the caption. We define a simple yet powerful retrieval architecture, named TransResNet. It works by projecting the image, personality, and caption in the same space S using image, personality, and text encoders. Image and Personality Encoders The representation r I of an image I is obtained by using the 2048-dimensional output of the image encoder described in Sec. 4.1 as input to a multi-layer perceptron with ReLU activation units and a final layer of 500 dimensions. To take advantage of personality traits in the PERSONALITY-CAPTIONS task, we embed each trait to a 500-dimensional vector to obtain its representation r P. Image and personality representations are then summed. Caption Encoders Each caption is encoded into a vector r C of the same size using a Transformer architecture BID42, followed by a two layer perceptron. We try two sizes of Transformer: a larger architecture (4 layers, 300 hidden units, 6 attention heads) and a smaller one (2 layers, 300 hidden units, 4 attention heads). We consider either training from scratch or pretraining our models. We either pretrain only the word embeddings, i.e. where we initialize word vectors trained using fastText BID3 ) trained on Wikipedia, or pretrain the entire encoder. For the latter, we follow the setup described in BID32: we train two encoders on a next-utterance retrieval task on a dataset of dialogs containing 1.7 billion pairs of utterances, where one encodes the context and another the candidates for the next utterance, their dot product indicates the degree of match, and they are trained with negative log-likelihood and k-negative sampling. We then initialize our system using the weights of the candidate encoder only, and then train on our task. For comparison, we also consider a simple bag-of-words encoder (pretrained or not). In this case, r C is the sum of the 300-dimensional word embeddings of the caption. In each case, given an input image and personality trait (I, P) and a candidate caption C, the score of the final combination is then computed as s(I, P, C) = (r I + r P) · r C. Figure 2: Our architecture TransResNet, used for our retrieval models. 
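For readers who want a concrete picture of the scorer just described, the following is a minimal, hedged sketch (layer names and the exact depth of the image MLP are our assumptions; the caption Transformer is abbreviated to a generic encoder argument). It computes the score s(I, P, C) = (r_I + r_P) · r_C described above.

```python
import torch
import torch.nn as nn

class TransResNetScorer(nn.Module):
    """Minimal sketch of the retrieval scorer: s(I, P, C) = (r_I + r_P) . r_C."""

    def __init__(self, n_personalities=215, img_feat_dim=2048, out_dim=500,
                 caption_encoder=None):
        super().__init__()
        # image features (e.g. from ResNeXt-IG-3.5B) -> 500-d, MLP with ReLU
        self.img_mlp = nn.Sequential(
            nn.Linear(img_feat_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )
        # one 500-d embedding per personality trait
        self.personality_emb = nn.Embedding(n_personalities, out_dim)
        # stand-in for the Transformer caption encoder followed by a small MLP
        self.caption_encoder = caption_encoder

    def forward(self, img_feats, personality_ids, captions):
        r_i = self.img_mlp(img_feats)                # (B, 500)
        r_p = self.personality_emb(personality_ids)  # (B, 500)
        r_c = self.caption_encoder(captions)         # (B, 500)
        return ((r_i + r_p) * r_c).sum(dim=-1)       # dot-product score per example
```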
Training and Inference Given a pair I, P, and a set of candidates (c 1, .., c N), at inference time the predicted caption is the candidate c i that maximizes the score s(I, P, c i). At training time we pass a set of scores through a softmax and train to maximize the log-likelihood of the correct responses. We use mini-batches of 500 training examples; for each example, we use the captions of the other elements of the batch as negatives. Our overall TransResNet architecture is detailed in Figure 2. We first test our architectures on traditional caption datasets to assess their ability to factually describe the contents of images in a neutral tone. We then apply the same architectures to PERSONALITY-CAPTIONS to assess their ability to produce engaging captions conditioned on personality. The latter is tested with both automatic metrics and human evaluation of engagingness. Generative Models For our generative models, we test the quality of our implementations of existing models (SHOWTELL, SHOWATTTELL and UPDOWN) as well as the quality of our image encoders, where we compare ResNet152 and ResNeXt-IG-3.5B. We report performance on the COCO caption dataset BID23. We evaluate BLEU BID37, ROUGE-L BID22, CIDEr and SPICE BID0 and compare model's performances to state-of-the-art models under BID20's setting. The are shown in TAB3. Models trained with ResNeXt-IG-3.5B features consistently outperform their counterparts with ResNet152 features, demonstrating the effectiveness of ResNeXt-IG-3.5B beyond the original image classification and detection in BID28. More importantly, our best model (UPDOWN) either outperforms or is competitive with state-ofthe-art single model performance BID1 Retrieval Models We compare our retrieval architecture, TransResNet, to existing models reported in the literature on the COCO caption and Flickr30k tasks. We evaluate retrieval metrics R@1, R@5, R@10, and compare our model performance to state-of-the-art models under the setting of BID20 ). The are given in Table 4 (for more details, see TAB0 in the appendix for COCO and Flickr30k, respectively). For our model, we see large improvements using ResNeXt-IG-3.5B compared to Resnet152, and stronger performance with a Transformer-based text encoding compared to a bag-of-words encoding. Pretraining the text encoder also helps substantially (see Appendix A for more analysis of pretraining of our systems). Our best models are competitive on COCO and are state-of-the-art on Flickr30k by a large margin (68.4 R@1 for our model vs. 56.8 R@1 for the previous state-of-the-art). Generative models We first train the aforementioned caption generation models without using the personality traits. This setting is similar to standard image captioning, and TAB5 shows that the three caption generation models that we considered are ranked in the same order, with the UPDOWN model being the most effective. The best are again obtained using the ResNeXt-IG-3.5B features. Adding the embedding of the personality trait allows our best model to reach a CIDEr score of 22.0, showing the importance of modeling personality in our new task. Note that all scores are lower than for the COCO captioning task. Indeed standard image captioning tries to produce text descriptions that are semantically equivalent to the image, whereas PERSONALITY-CAPTIONS captures how a human responds to a given image when speaking to another human when both can see the image -which is rarely to simply state its contents. 
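As a reference for the training procedure described in the Training and Inference paragraph above, the in-batch-negatives objective can be sketched as follows (a minimal version; variable names are ours): each (image, personality) vector is scored against every caption in the batch, and the softmax log-likelihood of the matching caption is maximized.

```python
import torch
import torch.nn.functional as F

def in_batch_retrieval_loss(r_img_pers, r_cap):
    """r_img_pers: (B, 500) fused image+personality vectors; r_cap: (B, 500) caption vectors.
    Every other caption in the batch serves as a negative for each example."""
    scores = r_img_pers @ r_cap.t()                      # (B, B) score matrix
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)              # -log softmax of the diagonal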
Hence, PERSONALITY-CAPTIONS has intrinsically more diverse outputs, similar to found in other human communication tasks BID24. For that reason we perform human evaluation in Section 5.3 in addition to automatic evaluations. Retrieval models Similarly we compare the effect of various configurations of our retrieval model, TransResNet. The models are evaluated in terms of R@1, where for each sample there are 100 candidates to rank: 99 randomly chosen candidates from the test set plus the true label. TAB6 shows the scores obtained on the test set of PERSONALITY-CAPTIONS. Again, the impact of using the image encoder trained on billions of images is considerable, we obtain 53.5% for our best ResNeXt-IG-3.5B model, and 34.4% for our best Resnet152 model. Conditioning on the personality traits is also very important (53.5% vs. 38.5% R@1 for the best variants with and without conditioning). Transformer text encoders also outperform bag-of-word embeddings encoders, Table 4: Retrieval model performance on Flickr30k and COCO caption using the splits of BID20 where pretraining for either type of encoder helps. For Transformers pretraining the whole network performed better than just pretraining the word embeddings, see Appendix A.Example predictions of our best model, TransResNet (ResNeXt-IG-3.5B), are given in TAB2. The goal of PERSONALITY-CAPTIONS is to be engaging to human readers by emulating human personality traits. We thus test our task and models in a set of human evaluation studies. Evaluation Setup Using 500 random images from the YFCC-100M dataset that are not present in PERSONALITY-CAPTIONS, we obtain captions for them using a variety of methods, as outlined in the sections below, including both human authored captions and model predicted captions. Using a separate set of human annotators, comparisons are then done pairwise: we show each image, with two captions to compare, to five separate annotators and ask them to choose the "more engaging" caption. For experiments where both captions are conditioned on a personality, we show the annotator the personality; otherwise, the personality is hidden. We then report the percentage of the time one method is chosen over the other. The are summarized in FIG0. We compare human authored PERSONALITY-CAPTIONS captions to human authored traditional neutral (COCO-like) captions. Captions conditioned on a personality were found to be significantly more engaging than those that were neutral captions of the image, with a win rate of 64.5%, which is statistically significant using a binomial two-tailed test. We compare the best-performing models from Section 5.2 to human authored PERSONALITY-CAPTIONS captions. For each test image we condition both human and model on the same (randomly-chosen) personality trait. Our best TransResNet model from Sec. We also compare our models in a pairwise fashion directly, as measured by human annotators. The given in FIG0 (all statistically significant) show the same trends as we observed before: TransResNet with ResNext-IG-3.5B outperforms the same model with ResNet152 features with a win rate of 55.2%, showing the importance of image features. Additionally, TransResNetwith ResNext-IG-3.5B image features (with no pretraining) also substantially outperforms the UPDOWN model using ResNext-IG-3.5B with a winrate of 80.1%. In this work we consider models that can simultaneously understand image content and provide engaging captions for humans. 
To build strong models, we first leverage the latest advances in image and sentence encoding to create generative and retrieval models that perform well on standard image captioning tasks. In particular, we attain a new state-of-the-art on caption generation on COCO, and introduce a new retrieval architecture, TransResNet, that yields the highest known hits@1 score on the Flickr30k dataset. To make the models more engaging to humans, we then condition them on a set of controllable personality traits. To that end, we collect a large dataset, PERSONALITY-CAPTIONS to train such models. Using automatic metrics and human evaluations, we show that our best system is able to produce captions that are close to matching human performance in terms of engagement. Our benchmark will be made publicly available to encourage further model development, leaving the possibility of superhuman performance coming soon in this domain. A IMPACT OF PRETRAINED WORD EMBEDDINGS AND TEXT ENCODERS Table 7: More detailed for retrieval model performance on COCO Captions using the splits of BID20. For our TransResNet models, we compare two types of pretraining: Full indicates a model with a pretrained text encoder, while Word indicates a model with pretrained word embeddings only. Caption retrieval Pretraining R@1 R@5 R@10 Med Rank 1k Images m-CNN BID27 42.8 -84.1 2.0 UVS BID21 43.4 75.7 85.8 2.0 HM-LSTM 43.9 -87.8 2.0 Order Embeddings BID44 46.7 -88.9 2.0 Embedding Net BID46 50.4 79.3 69.4 -DSPE+Fisher Vector 50.1 -89.2 -sm-LSTM BID16 53.2 83.1 91.5 1.0 VSE++ (ResNet, FT) BID10 64.6 90.0 95.7 1.0 GXN (i2t+t2i) BID12 68. BID44 23.3 -65.0 5.0 VSE++ (ResNet, FT) BID10 41.3 71.1 81.2 2.0 GXN (i2t+t2i) BID12 42. Table 8: Retrieval model performance on Flickr30k using the splits of BID20. For our models, we compare two types of pretraining: Full indicates a model with a pretrained text encoder, while Word indicates a model with pretrained word embeddings only. Caption retrieval Pretraining R@1 R@5 R@10 Med Rank UVS BID21 23.0 50.7 62.9 5.0 UVS (Github) 29.8 58.4 70.5 4.0 Embedding Net BID46 40.7 69.7 79.2 -DAN BID34 41.4 73.5 82.5 2.0 sm-LSTM BID16 42.5 71.9 81.5 2.0 2WayNet BID8 49.8 67.5 --VSE++ (ResNet, FT) BID10 52.9 80.5 87.2 1.0 DAN (ResNet) BID34 55.0 81.8 89.0 1.0 GXN (i2t+t2i) BID12 56. Engaging-only Captions Instead of asking to author a caption based on a personality trait, we can ask humans to simply write an "engaging" caption instead, providing them with no personality cue. We found that human annotators overall preferred captions written by those unconditioned on a personality by a slight margin (∼ 54%). To further understand this difference, we split the images into three subsets based on the personality on which the PERSONALITY-CAPTIONS annotator conditioned their caption, i.e. whether the personality was positive, negative, or neutral. We then examined the engagingness rates of images for each of these subsets. In the set where PERSONALITY-CAPTIONS annotators were provided with positive personalities, which totaled 185 out of the 500 images, we found that human annotators preferred the captions conditioned on the personality to those that were not. However, in the other two sets, we found that the unconditioned captions were preferred to the negative or neutral ones. For these two subsets, we believe that, without the context of any personality, annotators may have preferred the inherently more positive caption provided by someone who was asked to be engaging but was not conditioned on a personality. 
Diversity of captions We found that the captions written via our method were not only more engaging for positive personality traits, but also ed in more diversity in terms of personality traits. To measure this diversity, we constructed a model that predicted the personality of a given comment. The classifier consists in the same Transformer as described in 4.3, pre-trained on the same large dialog corpus, followed by a softmax over 215 units. We then compare the total number of personality types as predicted by the classifier among each type of human-labeled data: "engaging" captions conditioned on personalities, "engaging" captions not conditioned on personalities, and traditional image captions. That is, we look at each caption given by the human annotators, assign it a personality via the classifier, and then look at the total set of personalities we have at the end for each set of human-labeled data. For example, out of the 500 human-generated traditional captions, the classifier found 63% of all possible positive personalities in this set of captions. As indicated in TAB0, the human annotators who were assigned a personality produce more diverse captions, particularly negatively and neutrally conditioned ones, as compared to human annotators who are just told to be "engaging" or those who are told to write an image caption. The ultimate test of our generative and retrieval models on PERSONALITY-CAPTIONS is performed using human evaluations. Comparing them using automatic metrics is typically difficult because retrieval methods perform well with ranking metrics they are optimized for and generative models perform well with word overlap metrics they are optimized for, but neither of these necessarily correlate with human judgements, see e.g..Nevertheless, here we compare our generative and retrieval models directly with automatic metrics on COCO. We computed the BLEU, CIDEr, SPICE, and ROUGE-L scores for our best TransResNet model. The comparison is given in TAB0. TAB0: Generative and retrieval model performance on COCO caption using the test split of BID20 That is so cool! I I love street art! OptimisticThe future is bright for people who can dream in artistic ways. Critical I do believe this taggers verbage is a tad junvenile Charming What a charming wall. Adventurous I think I could create art like that, I will go learn and take action. The color of this flower is absolutely astounding. I can't believe it. Wishful I always wish I could grow these types of flowers. Sweet Beautiful flowers! I would give them to you. RomanticThe pink flowers would make a beautiful bouquet for my wife. Oh my, what a lovely purple color of nature's new sprouts! TAB0: More example predictions from our best TRANSRESNET model on the PERSONALITY-CAPTIONS validation set. | We develop engaging image captioning models conditioned on personality that are also state of the art on regular captioning tasks. | 1,217 | scitldr |
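The trait-diversity measurement described above (classify each caption with a pretrained personality classifier and report how many of the possible traits are covered) can be sketched as follows; `classifier` and `traits_by_polarity` are stand-ins for the pretrained Transformer classifier and the positive/neutral/negative trait lists, and are not part of the released code.

```python
def trait_coverage(captions, classifier, traits_by_polarity):
    """Fraction of possible traits that appear among the classifier's predictions.

    classifier(caption) is a stand-in for the pretrained Transformer + softmax
    over 215 traits; traits_by_polarity maps 'positive'/'neutral'/'negative'
    to the set of traits in that class.
    """
    predicted = {classifier(c) for c in captions}
    return {
        polarity: len(predicted & traits) / len(traits)
        for polarity, traits in traits_by_polarity.items()
    }
```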
Machine learning (ML) models trained by differentially private stochastic gradient descent (DP-SGD) have much lower utility than the non-private ones. To mitigate this degradation, we propose a DP Laplacian smoothing SGD (DP-LSSGD) to train ML models with differential privacy (DP) guarantees. At the core of DP-LSSGD is the Laplacian smoothing, which smooths out the Gaussian noise used in the Gaussian mechanism. Under the same amount of noise used in the Gaussian mechanism, DP-LSSGD attains the same DP guarantee, but a better utility especially for the scenarios with strong DP guarantees. In practice, DP-LSSGD makes training both convex and nonconvex ML models more stable and enables the trained models to generalize better. The proposed algorithm is simple to implement and the extra computational complexity and memory overhead compared with DP-SGD are negligible. DP-LSSGD is applicable to train a large variety of ML models, including DNNs. Many released machine learning (ML) models are trained on sensitive data that are often crowdsourced or contain private information (; ;). With overparameterization, deep neural nets (DNNs) can memorize the private training data, and it is possible to recover them and break the privacy by attacking the released models . For example, Fredrikson et al. demonstrated that a model-inversion attack can recover training images from a facial recognition system . Protecting the private data is one of the most critical tasks in ML. Differential privacy (DP) is a theoretically rigorous tool for designing algorithms on aggregated databases with a privacy guarantee. The idea is to add a certain amount of noise to randomize the output of a given algorithm such that the attackers cannot distinguish outputs of any two adjacent input datasets that differ in only one entry. For repeated applications of additive noise based mechanisms, many tools have been invented to analyze the DP guarantee for the model obtained at the final stage. These include the basic and strong composition theorems and their refinements (; 2010;), the moments accountant , etc. Beyond the original notion of DP, there are also many other ways to define the privacy, e.g., local DP , concentrated/zeroconcentrated DP , and Rényi-DP (RDP) . Differentially private stochastic gradient descent (DP-SGD) reduces the utility of the trained models severely compared with SGD. As shown in Figure 1, the training and validation losses of the logistic regression on the MNIST dataset increase rapidly when the DP guarantee becomes stronger. The convolutional neural net (CNN) 1 trained by DP-SGD has much lower testing accuracy than the non-private one on the MNIST. We will discuss the detailed experimental settings in Section 4. A natural question raised from such performance degradations is: Can we improve DP-SGD, with negligible extra computational complexity and memory cost, such that it can be used to train general ML models with improved utility? We answer the above question affirmatively by proposing differentially private Laplacian smoothing SGD (DP-LSSGD) to improve the utility in privacy-preserving empirical risk minimization (ERM). DP-LSSGD leverages the Laplacian smoothing as a post-processing to smooth the injected Gaussian noise in the differentially private SGD (DP-SGD) to improve the convergence of DP-SGD in training ML models with DP guarantee. 
The main contributions of our work are highlighted as follows: • We propose DP-LSSGD and prove its privacy and utility guarantees for convex/nonconvex optimizations. We prove that under the same privacy budget, DP-LSSGD achieves better utility, excluding a small term that is usually dominated by the other terms, than DP-SGD by a factor that is much less than one for convex optimization. • We perform a large number of experiments logistic regression and CNN to verify the utility improvement by using DP-LSSGD. Numerical show that DP-LSSGD remarkably reduces training and validation losses and improves the generalization of the trained private models. In Table 1, we compare the privacy and utility guarantees of DP-LSSGD and DP-SGD. For the utility, the notationÕ(·) hides the same constant and log factors for each bound. The constants d and n denote the dimension of the model's parameters and the number of training points, respectively. The numbers γ and β are positive constants that are strictly less than one, and D 0, D σ, G are positive constants, which will be defined in Section 3. σ, we will discuss this in detail in Section 4. There is a massive volume of research over the past decade on designing algorithms for privacypreserving ML. Objective perturbation, output perturbation, and gradient perturbation are the three major approaches to perform ERM with a DP guarantee.; considered both output and objective perturbations for privacy-preserving ERM, and gave theoretical guarantees for both privacy and utility for logistic regression and SVM. numerically studied the effects of learning rate and batch size in DP-ERM. studied stability, learnability and other properties of DP-ERM. proposed an adaptive per-iteration privacy budget in concentrated DP gradient descent. Variance reduction techniques, e.g., SVRG, have also been introduced to DP-ERM. The utility bound of DP-SGD has also been analyzed for both convex and nonconvex smooth objectives (; . analyzed the excess empirical risk of DP-ERM in a distributed setting. Besides ERM, many other ML models have been made differentially private. These include: clustering (; Y. ;), matrix completion , online learning , sparse learning , and topic modeling . exploited the ill-conditionedness of inverse problems to design algorithms to release differentially private measurements of the physical system. considered sparse linear regression in the local DP models. proposed distributed selective SGD to train deep neural nets (DNNs) with a DP guarantee in a distributed system, however, the obtained privacy guarantee was very loose. considered applying DP-SGD to train DNNs in a centralized setting. They clipped the gradient 2 norm to bound the sensitivity and invented the moment accountant to get better privacy loss estimation. proposed Private Aggregation of Teacher Ensembles/PATE based on the semi-supervised transfer learning to train DNNs, and this framework improves both privacy and utility on top of the work by. introduced new noisy aggregation mechanisms for teacher ensembles that enable a tighter theoretical DP guarantee. The modified PATE is scalable to the large dataset and applicable to more diversified ML tasks. Laplacian smoothing (LS) can be regarded as a denoising technique that performs post-processing on the Gaussian noise injected stochastic gradient. Denoising has been used in the DP earlier: Postprocessing can enforce consistency of contingency table releases and leads to accurate estimation of the degree distribution of private network . 
Prior work showed that post-processing by projecting linear regression solutions, when the ground truth solution is sparse, onto a given ℓ1-ball can remarkably reduce the estimation error. Other work used Expectation-Maximization to denoise a class of graphical models' parameters, and it has been shown that, in output-perturbation based differentially private algorithm design, denoising dramatically improves the accuracy of the Gaussian mechanism in the high-dimensional regime. To the best of our knowledge, we are the first to design a denoising technique on the Gaussian noise injected gradient to improve the utility of the trained private ML models. We use boldface upper-case letters A, B to denote matrices and boldface lower-case letters x, y to denote vectors. For vectors x and y and a positive definite matrix A, we use ‖x‖_2 and ‖x‖_A to denote the ℓ2-norm and the norm induced by A, respectively; ⟨x, y⟩ denotes the inner product of x and y; and λ_i(A) denotes the i-th largest eigenvalue of A. We denote the set of numbers from 1 to n by [n]. N(0, I_{d×d}) represents the d-dimensional standard Gaussian. This paper is organized in the following way: in Section 2, we introduce the DP-LSSGD algorithm. In Section 3, we analyze the privacy and utility guarantees of DP-LSSGD for both convex and nonconvex optimization. We numerically verify the efficiency of DP-LSSGD in Section 4. We conclude this work and point out some future directions in Section 5. 2 PROBLEM SETUP AND ALGORITHM In this paper, we consider the empirical risk minimization problem as follows. Given a training set S = {(x_1, y_1),..., (x_n, y_n)} drawn from some unknown but fixed distribution, we aim to find an empirical risk minimizer that minimizes the empirical risk min_w F(w) := (1/n) Σ_{i=1}^n f_i(w), (1) where F(w) is the empirical risk (a.k.a., training loss), f_i(w) = ℓ(w; x_i, y_i) is the loss function of a given ML model defined on the i-th training example (x_i, y_i), and w ∈ R^d is the model parameter we want to learn. Empirical risk minimization serves as the mathematical foundation for training many of the ML models mentioned above. The LSSGD for solving (1) is given by w^{k+1} = w^k − η A_σ^{-1} (1/b) Σ_{i_k ∈ B_k} ∇f_{i_k}(w^k), where η is the learning rate, ∇f_{i_k} denotes the stochastic gradient of F evaluated at the input-output pair {x_{i_k}, y_{i_k}}, and B_k is a random subset of size b from [n]. Let A_σ = I − σL for a constant σ ≥ 0, where I ∈ R^{d×d} and L ∈ R^{d×d} are the identity and the discrete one-dimensional Laplacian matrix with periodic boundary condition, respectively. Therefore, A_σ is the circulant matrix whose first row is (1 + 2σ, −σ, 0,..., 0, −σ). When σ = 0, LSSGD reduces to SGD. Note that A_σ is positive definite with condition number 1 + 4σ, which is independent of A_σ's dimension, and LSSGD guarantees the same convergence rate as SGD in both convex and nonconvex optimization. Moreover, Laplacian smoothing (LS) can reduce the variance of SGD on-the-fly, and leads to better generalization in training many ML models including DNNs. We can write A_σ v = v − σ d * v with d = (−2, 1, 0,..., 0, 1)^T, where * is the convolution operator. By the fast Fourier transform (FFT), we have A_σ^{-1} v = ifft(fft(v)/(1 − σ · fft(d))), where the division in the right-hand-side parentheses is performed coordinate-wise. DP ERM aims to learn a DP model, w, for the problem (1). A common approach is injecting Gaussian noise into the stochastic gradient, resulting in the following DP-SGD: w^{k+1} = w^k − η ((1/b) Σ_{i_k ∈ B_k} ∇f_{i_k}(w^k) + n), where n is the injected Gaussian noise for the DP guarantee. Note that the LS matrix A_σ^{-1} can remove the noise in a vector v: if we regard v as an initial signal, then A_σ^{-1} v can be seen as performing an approximate diffusion step on that noisy signal, which removes noise from v.
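A minimal sketch of the coordinate-wise FFT solve for A_σ^{-1}v described above, together with a single DP-LSSGD step (gradient clipping as in Section 4, Gaussian noise injection, Laplacian smoothing, then the gradient update), is given below. The per-coordinate noise standard deviation (ν from Theorem 1) and the clipping constant C are passed in as inputs, and mini-batch handling is omitted for brevity; the function names are ours.

```python
import numpy as np

def laplacian_smooth(v, sigma):
    """Apply A_sigma^{-1} = (I - sigma * L)^{-1} to a 1-D vector v via the FFT.
    L is the 1-D discrete Laplacian with periodic boundary conditions, so
    A_sigma is circulant and its inverse is a coordinate-wise division in
    Fourier space."""
    d = np.zeros_like(v, dtype=float)
    d[0], d[1], d[-1] = -2.0, 1.0, 1.0            # stencil of L (first row)
    denom = 1.0 - sigma * np.fft.fft(d)           # eigenvalues of A_sigma, all >= 1
    return np.real(np.fft.ifft(np.fft.fft(v) / denom))

def dp_lssgd_step(w, grad, eta, sigma, noise_std, clip_C, rng=None):
    """One DP-LSSGD update: clip the stochastic gradient, add Gaussian noise,
    Laplacian-smooth the noisy gradient, then take a gradient step."""
    rng = rng or np.random.default_rng()
    grad = grad / max(1.0, np.linalg.norm(grad) / clip_C)   # v <- v / max(1, ||v||_2 / C)
    noisy = grad + rng.normal(0.0, noise_std, size=grad.shape)
    return w - eta * laplacian_smooth(noisy, sigma)
```

Setting sigma = 0 recovers plain DP-SGD, which matches the σ = 0 baseline used throughout the experiments.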
We will provide a detailed argument for the diffusion process in the appendix. As numerical illustrations, we consider the following two signals: We reshape v 2 into 1D with row-major ordering and then perform LS. Figure 2 shows that LS can remove noise efficiently. This noise removal property enables LSSGD to be more stable to the noise injected stochastic gradient, therefore improves training DP models with gradient perturbations. We propose the following DP-LSSGD for solving with DP guarantee In this scheme, we first inject the noise n to the stochastic gradient ∇f i k (w k), and then apply the LS operator A −1 σ to denoise the noisy stochastic gradient, ∇f i k (w k) + n, on-the-fly. We assume that each component function f i in is G-Lipschitz. The DP-LSSGD for finite-sum optimization is summarized in Algorithm 1. Compared with LSSGD, the main difference of DP-LSSGD lies in injecting Gaussian noise into the stochastic gradient, before applying the Laplacian smoothing, to guarantee the DP. initial guess of w, (, δ): the privacy budget, η: the step size, T: the total number of iterations. and ν is defined in Theorem 1, and In this section, we present the privacy and utility guarantees for DP-LSSGD. The technical proofs are provided in the appendix. Definition 1 ((, δ)-DP). A randomized mechanism M: S N → R satisfies (, δ)-DP if for any two adjacent datasets S, S ∈ S N differing by one element, and any output subset O ⊆ R, it holds that Theorem 1 (Privacy Guarantee). Suppose that each component function f i is G-Lipschitz. Given the total number of iterations T, for any δ > 0 and privacy budget 2 ≤ 5T log(1/δ)b 2 /n 2, DP-LSSGD, with injected Gaussian noise N (0, ν 2) for each coordinate, satisfies (, δ)-DP with ν 2 = 8T αG 2 /(n 2), where α = 2 log(1/δ)/ + 1. Remark 1. It is straightforward to show that the noise in Theorem 1 is in fact also tight to guarantee the (, δ)-DP for DP-SGD. For convex ERM, DP-LSSGD guarantees the following utility in terms of the gap between the ergodic average of the points along the DP-LSSGD path and the optimal solution w *. Theorem 2 (Utility Guarantee for convex optimization). Suppose F is convex and each component function Aσ and w * is the global minimizer of F, the DP-LSSGD outputw =, where ω = 2σ+1− √ 4σ+1 2σ < 1. That is, γ converge to 0 almost exponentially as the dimension, d, increases. Remark 2. In the above utility bound for convex optimization, for different σ (σ = 0 corresponds to DP-SGD), the only difference lies in the term γ(D σ + G 2). The first part γD σ depends on the gap between initialization w 0 and the optimal solution w *. The second part γG 2 decrease monotonically as σ increases. σ should be selected to get an optimal trade-off between these two parts. Based on our test on multi-class logistic regression for MNIST classification, σ = 0 always outperforms the case when σ = 0. For nonconvex ERM, DP-LSSGD has the following utility bound measured in gradient norm. Theorem 3 (Utility Guarantee for nonconvex optimization). Suppose that F is nonconvex and each component function f i is G-Lipschitz and has L-Lipschitz continuous gradient. Given with w * being the global minimum of F, then the DP-LSSGD outputw = T −1 k=0 w k /T satisfies the following utility Proposition 2. In Theorem 3, β = It is worth noting that if we use the 2 -norm instead of the induced norm, we have the following utility guarantee In the 2 -norm, DP-LSSGD has a bigger utility upper bound than DP-SGD (set σ = 0 in ζ). 
However, this does not mean that DP-LSSGD has worse performance. To see this point, let us consider the following simple nonconvex function For two points a 1 = and a 2 = (1, √ 3/2), the distance to the local minima a * = are 2 and √ 7/2, while ∇f (a 1) 2 = 1 and ∇f (a 2) 2 = √ 13/2. So a 2 is closer to the local minima a * than a 1 while its gradient has a larger 2 -norm. In this section, we verify the efficiency of DP-LSSGD in training multi-class logistic regression and CNNs for MNIST and CIFAR10 classification. We use v ← v/ max (1, v 2 /C) to clip the gradient 2 -norms of the CNNs to C. The gradient clipping guarantee the Lipschitz condition for the objective functions. We train all the models below with (, 10 −5)-DP guarantee for different. For Logistic regression we use the privacy budget given by Theorem 1, and for CNNs we use the privacy budget in the Tensorflow privacy (Andrew & et al., 2019). We checked that these two privacy budgets are consistent. We ran 50 epochs of DP-LSSGD with learning rate scheduled as 1/t with t being the index of the iteration to train the 2 -regularized (regularization constant 10 −4) multi-class logistic regression. We split the training data into 50K/10K with batch size 128 for cross-validation. We plot the evolution of training and validation loss over iterations for privacy budgets (0.2, 10 −5) and (0.1, 10 −5) in Figure 3. We see that the training loss curve of DP-SGD (σ = 0) is much higher and more oscillatory (log-scale on the y-axis) than that of DP-LSSGD (σ = 1, 3). Also, the validation loss of the model trained by DP-LSSGD decays faster and has a much smaller loss value than that of the model trained by DP-SGD. Moreover, when the privacy guarantee gets stronger, the utility improvement by DP-LSSGD becomes more significant. Next, consider the testing accuracy of the multi-class logistic regression trained with (, 10 −5)-DP guarantee by DP-LSSGD includes σ = 0, i.e., DP-SGD. We list the test accuracy of logistic regression trained in different settings in Table 2. These reveal that DP-LSSGD with σ = 1, 2, 3 can improve the accuracy of the trained private model and also reduce the variance, especially when the privacy guarantee is very strong, e.g., (0.1, 10 −5). We know that the step size in DP-SGD/DP-LSSGD may affect the accuracy of the trained private models. We try different step size scheduling of the form {a/t|a = 0.5, 1.0, 1.5, 2.0, 2.5, 3.0}, where t is again the index of iteration, and all the other hyper-parameters are used the same as before. Figure. 4 plots the test accuracy of the logistic regression model trained with different learning rate scheduling and different privacy budget. We see that the private logistic regression model trained by DP-LSSGD always outperforms DP-SGD. In this subsection, we consider training a small CNN 2 with DP-guarantee for MNIST classification. We implement DP-LSSGD and DP-LSAdam (simply replace the noisy gradient in DP-Adam in the Tensorflow privacy with the Laplacian smoothed surrogate) into the Tensorflow privacy framework (Andrew & et al., 2019). We use the default learning rate 0.15 for DP-(LS)SGD and 0.001 for DP-(LS)Adam and decay them by a factor of 10 at the 10K-th iteration, norm clipping, batch size, and micro-batches. We vary the noise multiplier (NM), and larger NM guarantees stronger DP. As shown in Figure 5, the privacy budget increases at exactly the same speed (dashed red line) for four optimization algorithms. 
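For reference, a minimal sketch of the two mechanical pieces used in these experiments — the ℓ₂-norm gradient clipping quoted above and the noise-then-smooth DP-LSSGD update — is given below. The per-example clipping convention follows the TensorFlow-privacy style setup mentioned above and is an assumption here, the noise scale ν is left as a free parameter (Theorem 1 sets ν² = 8TαG²/(n²ε) with α = 2 log(1/δ)/ε + 1), grad_fn is a user-supplied per-example gradient, and laplacian_smoothing refers to the FFT routine sketched earlier.

```python
import numpy as np

def clip_grad(g, C):
    """v <- v / max(1, ||v||_2 / C): project the gradient onto the L2-ball of
    radius C so that the G-Lipschitz assumption holds with G = C."""
    return g / max(1.0, np.linalg.norm(g) / C)

def dp_lssgd_step(w, grad_fn, batch, eta, sigma, nu, C, rng):
    """One DP-LSSGD update: clip per-example gradients, average them, add
    Gaussian noise for the DP guarantee, then denoise the noisy gradient with
    the Laplacian-smoothing operator A_sigma^{-1}."""
    g = np.mean([clip_grad(grad_fn(w, example), C) for example in batch], axis=0)
    g_noisy = g + nu * rng.standard_normal(w.shape)
    return w - eta * laplacian_smoothing(g_noisy, sigma)   # sigma = 0 recovers DP-SGD
```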
When the NM is large, i.e., DP-guarantee is strong, DP-SGD performs very well in the initial period. However, after a few epochs, the validation accuracy gets highly oscillatory and decays. DP-LSSGD can mitigate the training instability issue of DP-SGD. DP-Adam outperforms DP-LSSGD, and DP-LSAdam can further improve validation accuracy on top of DP-Adam. Next, we consider the effects of the LS constant (σ) and the learning rate in training the DP-CNN for MNIST classification. We fixed the NM to be 10, and run 60 epochs of DP-SGD and DP-LSSGD with different σ and different learning rate. We show the comparison of DP-SGD with DP-LSSGD with different σ in the left panel of Figure 6, and we see that as σ increases it becomes more stable in training CNNs with DP-guarantee even though initially it becomes slightly slower. In the middle panel of Figure 6, we plot the evolution of validation accuracy curves of the DP-CNN trained by DP-SGD and DP-LSSGD with different learning rate, where the solid lines represent for DP-LSSGD and dashed lines for DP-SGD. DP-LSSGD outperforms DP-SGD in all learning rates tested, and DP-LSSGD is much more stable than DP-SGD when a larger learning rate is used. Finally, we go back to the accuracy degradation problem raised in Figure 1. As shown in Figure 3, LS can efficiently reduce both training and validation losses in training multi-class logistic regression for MNIST classification. Moreover, as shown in the right panel of Figure 6, DP-LSSGD can improve the testing accuracy of the CNN used above significantly. In particular, DP-LSSGD improves the testing accuracy of CNN by 3.2% and 5.0% for (0.4, 10 −5) and (0.2, 10 −5), respectively, on top of DP-SGD. DP-LSAdam can further boost test accuracy. All the accuracies associated with any given privacy budget in Figure 6 (right panel), are the optimal ones searched over the obtained in the above experiments with different learning rate, number of epochs, and NM. Due to page limitation, we put the of DP-CNN for CIFAR10 classification in the appendix. In this paper, we integrated Laplacian smoothing with DP-SGD for privacy-presrving ERM. The ing algorithm is simple to implement and the extra computational cost compared with the DP-SGD is almost negligible. We show that DP-LSSGD can improve the utility of the trained private ML models both numerically and theoretically. It is straightforward to combine LS with other variance reduction technique, e.g., SVRG . To prove the privacy guarantee in Theorem 1, we first introduce the following 2 -sensitivity. Definition 2 (2 -Sensitivity). For any given function f (·), the 2 -sensitivity of f is defined by where S − S 1 = 1 means the data sets S and S differ in only one entry. We will adapt the concepts and techniques of Rényi DP (RDP) to prove the DP-guarantee of the proposed DP-LSSGD. Definition 3 (RDP). For α > 1 and ρ > 0, a randomized mechanism M: S n → R satisfies (α, ρ)-Rényi DP, i.e., (α, ρ)-RDP, if for all adjacent datasets S, S ∈ S n differing by one element, we have where the expectation is taken over M(S). Lemma 1. ) Given a function q: S n → R, the Gaussian Mechanism M = q(S) + n, where n ∼ N (0, ν 2 I), satisfies (α, α∆ 2 (q)/(2ν 2))-RDP. In addition, if we apply the mechanism M to a subset of samples using uniform sampling without replacement, M satisfies Moreover, the input of the i-th mechanism can be based on outputs of the previous (i − 1) mechanisms. Lemma 3. 
If a randomized mechanism M: S n → R satisfies (α, ρ)-RDP, then M satisfies (ρ + log(1/δ)/(α − 1), δ)-DP for all δ ∈. With the definition (Def. 3) and guarantees of RDP (Lemmas 1 and 2), and the connection between RDP and (, δ)-DP (Lemma 3), we can prove the following DP-guarantee for DP-LSSGD. Proof of Theorem 1. Let us denote the update of DP-SGD and DP-LSSGD at the k-th iteration starting from any given points w k andw k, respectively, as where B k is a mini batch that are drawn uniformly from [n], and |B k | = b is the mini batch size. We will show that with the aforementioned Gaussian noise N (0, ν 2) for each coordinate of n, the output of DP-SGD,w, after T iterations is (, δ)-DP. Let us consider the mechanismM k = b. According to Lemma 1, if we add noise with variance the mechanism M k will satisfy α, (n 2 /b 2) log(1/δ)/ 2(α − 1)T -RDP. By post-processing theorem, we immediately have that under the same noise, According to Lemma 1,M k will satisfy α, log(1/δ)/(α−1)T -RDP provided that ν 2 ≥ 1/1.25, because τ = b/n. Let α = 2 log(1/δ)/ + 1, we obtain thatM k satisfies 2 log(1/δ)/ + 1, /(2T) -RDP as long as we have Therefore, the following condition suffices Therefore, according to Lemma 2, we have w k satisfies 2 log(1/δ)/ + 1, k /(2T) -RDP. Finally, by Lemma 3, we have w k satisfies k /(2T) + /2, δ -DP. Therefore, the output of DP-SGD,w, is (, δ)-DP. Remark 3. In the above proof, we used the following estimate of the 2 sensitivity σ g, then according to we have where d is the dimension of d, and Moreover, if we assume the g is randomly sampled from a unit ball in a high dimensional space, then a high probability estimation of the compression ratio of the 2 norm can be derived from Lemma. 5., so for the above noise, it can give much stronger privacy guarantee. To prove the utility guarantee for convex optimization, we first show that the LS operator compresses the 2 norm of any given Gaussian random vector with a specific ratio in expectation. Lemma 4. Let x ∈ R d be the standard Gaussian random vector. Then Proof of Theorem 2. Recall that we have the following update rule w, where i k are drawn uniformly from [n], and n ∼ N (0, ν 2 I). Observe that Taking expectation with respect to i k and n given w k, we have where the second inequality is due to the convexity of F, and Lemma 4. It implies that Now taking the full expectation and summing up over T iterations, we have where According to the definition ofw and the convexity of F, we obtain To prove the utility guarantee for nonconvex optimization, we need the following lemma, which shows that the LS operator compresses the 2 norms of any given Gaussian random vector with a specific ratio in expectation. Lemma 5. Let x ∈ R d be the standard Gaussian random vector. Then Proof of Lemma 5. Let the eigenvalue decomposition of A −1 Proof of Theorem 3. Recall that we have the following update rule w, where i k are drawn uniformly from [n], and n ∼ N (0, ν 2 I). Since F is L-smooth, we have Taking expectation with respect to i k and n given w k, we have where the second inequality uses Lemma 5 and the last inequality is due to 1 − η k L/2 > 1/2. Now taking the full expectation and summing up over T iterations, we have 2. If we choose fix step size, i.e., η k = η, and rearranging the above inequality, and using which implies that B CALCULATIONS OF β AND γ To prove Proposition 1, we need the following two lemmas. Lemma 6 (Residue Theorem). 
Let f (z) be a complex function defined on C, then the residue of f around the pole z = c can be computed by the formula where the order of the pole c is n. Moreover, where {c i} be the set of pole(s) of f (z) inside {z||z| < 1}. The proof of Lemma 6 can be found in any complex analysis textbook. Lemma 7. For 0 ≤ θ ≤ 2π, suppose has the discrete-time Fourier transform of series f [k]. Then, for integer k, Proof. By definition, We compute by using Residue theorem. First, note that because; therefore, it suffices to compute) for nonnegative k. Set z = e iθ. Observe that cos(θ) = 0.5(z + 1/z) and dz = izdθ. Substituting in and simplifying yields that where the integral is taken around the unit circle, and are the roots of quadratic −σz 2 + (2σ + 1)z − σ. Note that α − lies within the unit circle; whereas, α + lies outside of the unit circle. Therefore, because k is nonnegative, α − is the only singularity of the integrand in within the unit circle. A straightforward application of the Residue Theorem, i.e., Lemma 6, yields that This completes the proof. Proof of Proposition 1. First observe that we can re-write γ as It remains to show that the above summation is equal to. This follows by lemmas 7 and standard sampling in Fourier analysis (i.e. sampling θ at points {2πj/d} ). Nevertheless, we provide the details here for completeness: Observe that that the inverse discrete-time Fourier transform of is given by otherwise. The proof is completed by substituting the of lemma 7 in the above sum and simplifying. We list some typical values of γ in Table 1.. Therefore, we have We list some typical values of β in Table 2. where v 0 is the discretization of f (x), and v ∆t is the numerical solution of at time ∆t. Therefore, we have v ∆t = (I − ∆tL) which is the LS with σ = ∆t. In this section, we will show that LS can also improve the utility of the DP-CNN trained by DP-SGD and DP-Adam for CIFAR10 classification. We simply replace the CNN architecture used above for MNIST classification with the benchmark architecture in the Tensorflow tutorial 3 for CIFAR10 classification. Also, we use the same set of parameters as that used for training DP-CNN for MNIST classification except we fixed the noise multiplier to be 2.0 and clip the gradient 2 norm to 3. As shown in Figure 7, LS can significantly improve the validation accuracy of the model trained by DP-SGD and DP-Adam, and the DP guarantee for all these algorithms are the same (dashed line in Figure 7). | We propose a differentially private Laplacian smoothing stochastic gradient descent to train machine learning models with better utility and maintain differential privacy guarantees. | 1,218 | scitldr |
We study the robust one-bit compressed sensing problem whose goal is to design an algorithm that faithfully recovers any sparse target vector $\theta_0\in\mathbb{R}^d$ \emph{uniformly} from $m$ quantized noisy measurements. Under the assumption that the measurements are sub-Gaussian, to recover any $k$-sparse $\theta_0$ ($k\ll d$) \emph{uniformly} up to an error $\varepsilon$ with high probability, the best known computationally tractable algorithm requires\footnote{Here, an algorithm is ``computationally tractable'' if it has provable convergence guarantees. The notation $\tilde{\mathcal{O}}(\cdot)$ omits a logarithm factor of $\varepsilon^{-1}$.} $m\geq\tilde{\mathcal{O}}(k\log d/\varepsilon^4)$. In this paper, we consider a new framework for the one-bit sensing problem where the sparsity is implicitly enforced via mapping a low dimensional representation $x_0$ through a known $n$-layer ReLU generative network $G:\mathbb{R}^k\rightarrow\mathbb{R}^d$. Such a framework poses low-dimensional priors on $\theta_0$ without a known basis. We propose to recover the target $G(x_0)$ via an unconstrained empirical risk minimization (ERM) problem under a much weaker \emph{sub-exponential measurement assumption}. For such a problem, we establish a joint statistical and computational analysis. In particular, we prove that the ERM estimator in this new framework achieves an improved statistical rate of $m=\tilde{\mathcal{O}} (kn\log d /\epsilon^2)$ recovering any $G(x_0)$ uniformly up to an error $\varepsilon$. Moreover, from the lens of computation, we prove that under proper conditions on the ReLU weights, our proposed empirical risk, despite non-convexity, has no stationary point outside of small neighborhoods around the true representation $x_0$ and its negative multiple. Furthermore, we show that the global minimizer of the empirical risk stays within the neighborhood around $x_0$ rather than its negative multiple. Our analysis sheds some light on the possibility of inverting a deep generative model under partial and quantized measurements, complementing the recent success of using deep generative models for inverse problems. Quantized compressed sensing investigates how to design the sensing procedure, quantizer and reconstruction algorithm so as to recover a high dimensional vector from a limited number of quantized measurements. The problem of one-bit compressed sensing, which aims at recovering a target vector θ 0 ∈ R d from single-bit observations y i = sign(a i, θ 0), i ∈ {1, 2, · · ·, m}, m d and random sensing vectors a i ∈ R d, is particularly challenging. Previous theoretical successes on this problem (e.g. ;) mainly rely on two key assumptions: The Gaussianity of the sensing vector a i, The sparsity of the vector θ 0 on a given basis. However, the practical significance of these assumptions are rather limited in the sense that it is difficult to generate Gaussian vectors and high dimensional targets in practice are often distributed * Equal Contribution 1 Here, an algorithm is "computationally tractable" if it has provable convergence guarantees. The notatioñ O(·) omits a logarithm factor of ε −1. near a low-dimensional manifold rather than sparse on some given basis. The goal of this work is to make steps towards addressing these two limitations. 
Specifically, we introduce a new framework for robust dithered one-bit compressed sensing where the structure of target vector θ 0 is represented via a ReLU network G: Building upon this framework, we propose a new recovery algorithm by solving an unconstrained ERM. We show this algorithm enjoys the following favorable properties: • Statistically, when taking measurements a i to be sub-exponential random vectors, with high probability and uniformly for any is the ball of radius R > 0 centered at the origin, the solution G(x m) to the ERM recovers the true vector G(x 0) up to error ε when the number of samples m ≥ O(kn log 4 (ε −1)(log d + log(ε −1))/ε 2 ). In particular, our does not require REC type assumptions adopted in previous analysis of generative signal recovery works and at the same time weakens the known sub-Gaussian assumption adopted in previous one-bit compressed sensing works. When the number of layers n is small, this meets the minimax optimal rate (up to a logarithm factor) for sparse recovery and simultaneously improves upon the best knownÕ(k log d/ε 4) statistical rate for computationally tractable algorithms. • Computationally, we show that solving the ERM and approximate the true representation x 0 ∈ R k is tractable. More specifically, we prove with high probability, there always exists a descent direction outside two small neighborhoods around x 0 and its negative multiple with radius O(ε 1/4), uniformly for any x 0 ∈ B k 2 (R) with R = (0.5+ε) −n/2 R, when the ReLU network satisfies a weight distribution condition with parameter ε > 0 and m ≥ O(kn log 4 (ε −1)(log d + log(ε −1))/ε 2 ). Furthermore, when ε is small enough, one guarantees that the solution x m stays within the neighborhood around x 0 (rather than its negative multiple). Our is achieved without assuming the REC type conditions and under quantization errors, thereby improving upon previously known computational guarantees for ReLU generative signal recovery in linear models with small noise. From a technical perspective, our proof makes use of the special piecewise linearity property of ReLU network. The merits of such a property in the current scenario are two folds: It allows us to replaces the generic chaining type bounds commonly adopted in previous works (e.g. Dirksen and Mendelson (2018a) ) by novel arguments that are "sub-Gaussian free". From a hyperplane tessellation point of view, we show that for a given accuracy level, a binary embedding of 2 (R) into Euclidean space is "easier" in that it requires less random hyperplanes than that of a bounded k sparse set. Notations. Throughout the paper, let S d−1 and B(x, r) denotes the unit sphere and the ball of radius r centered at We say a random variable is sub-exponential if its ψ 1 -norm is bounded. A random vector x ∈ R d is sub-exponential if there exists a a constant C > 0 such that sup t∈S d−1 x, t ψ1 ≤ C. We use x ψ1 to denote the minimal C such that this bound holds. Furthermore, C, C, c, c 1, c 2, c 3, c 4, c 5 denote absolute constants, their actual values can be different per appearance. In this paper, we focus on one-bit recovery model in which one observes quantized measurements of the following form where a ∈ R d is a random measurement vector, ξ ∈ R is a random pre-quantization noise with an unknown distribution, τ is a random quantization threshold (i.e. dithering noise), and x 0 ∈ R k is the unknown representation to be recovered. 
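To make the measurement model concrete, the sketch below simulates y = sign(⟨a, G(x₀)⟩ + ξ + τ) for a small random ReLU generator. The network sizes, the Gaussian weights, and the Laplace (sub-exponential) choices for a and ξ are illustrative assumptions consistent with Assumption 2.1; the specific empirical risk L(x) is not reproduced here, only the forward model.

```python
import numpy as np

def relu_net(x, weights):
    """n-layer ReLU generator G(x) = relu(W_n ... relu(W_1 x))."""
    for W in weights:
        x = np.maximum(W @ x, 0.0)
    return x

rng = np.random.default_rng(0)
k, d, m, lam = 5, 200, 4000, 3.0
weights = [rng.standard_normal((50, k)) / np.sqrt(50),   # illustrative random weights;
           rng.standard_normal((d, 50)) / np.sqrt(d)]    # the paper treats G as fixed and known
x0 = rng.standard_normal(k)
target = relu_net(x0, weights)                           # G(x0), assumed to satisfy ||G(x0)||_2 <= R

A = rng.laplace(size=(m, d)) / np.sqrt(2.0)              # isotropic sub-exponential sensing rows
xi = 0.1 * rng.laplace(size=m) / np.sqrt(2.0)            # sub-exponential pre-quantization noise
tau = rng.uniform(-lam, lam, size=m)                     # uniform dithering thresholds
y = np.sign(A @ target + xi + tau)                       # observed one-bit measurements
```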
We are interested the high-dimensional scenario where the dimension of the representation space k is potentially much less than the ambient dimension d. The function G: R k → R d is a fixed ReLU neural network of the form: where σ(x) = max(x, 0) and σ • (x) denotes the entry-wise application of σ(·). We consider a scenario where the number of layers n is smaller than d, Throughout the paper, we assume that G(x 0) is bounded, i.e. there exists an R ≥ 1 such that G(x 0) 2 ≤ R, and we take τ ∼ Uni[−λ, +λ], i.e. a uniform distribution bounded by a chosen parameter λ > 0. Let {(a i, y i)} m i=1 be i.i.d. copies of (a, y). Our goal is to compute an We propose to solve the following ERM forx m: where It is worth mentioning, in general, there is no guarantee that the minimizer of L(x) is unique. Nevertheless, in Section §2.2, we will show that any solutionx m to this problem must stay inside small neighborhoods around the true signal x 0 and its negative multiple with high probability. Our statistical guarantee relies on the following assumption on the measurement vector and noise: Assumption 2.1. The measurement vector a ∈ R d is mean 0, isotropic and sub-exponential. The noise ξ is also a sub-exponential random variable. Under this assumption, we have the following main statistical performance theorem: Theorem 2.1. Suppose Assumption 2.1 holds and consider any ε ∈. Set the constant C a,ξ,R = max{c 1 (R a ψ1 + ξ ψ1), 1}, λ ≥ 4C a,ξ,R · log(64C ψ,ξ,R · ε −1) and Then, with probability at least 1 − c 3 exp(−u), ∀u ≥ 0, any solutionx m to satisfies for all x 0 such that G(x 0) 2 ≤ R, where c 1, c 2, c 3 ≥ 1 are absolute constants. Remark 2.1. It is easy to verify that the sample complexity enforced by holds when where C is a large enough absolute constant. This gives the m = O(kn log 4 (ε −1)(log d + log(ε −1))/ε 2 ) statistical rate. The question whether or not the dependency on n is redundant (comparing to that of sparse recovery guarantees) warrants further studies. Remark 2.2. Note that our is a uniform recovery in the sense that the bound G(x m) − G(x 0) 2 ≤ ε holds with high probability uniformly for any target x 0 ∈ R k such that G(x 0) 2 ≤ R. This should be distinguished from known bounds (; ; ;) on sparse one-bit sensing which hold only for a fixed sparse vector. Furthermore, though assuming boundedness of G(x 0), our recovery algorithm solves for the minimizer without knowing this bound, which is favorable for practice. Before presenting the on the global landscape, we first introduce some notations used in the rest of this paper. For any fixed x, we define W +,x:= diag(W x > 0)W, in which we set the rows of W having negative product with x to be zeros. We further define W i,+,x:= diag(W i W i−1,+,x · · · W 1,+,x x > 0)W i, where only active rows of W i are kept. Thus, we can represent the RuLU network by G(x) = (Π n i=1 W i,+,x)x:= W n,+,x W n−1,+,x · · · W 1,+,x x. Definition 2.1 (Weighted Distribution Condition (WDC) ). The matrix W ∈ R d ×k satisfies the Weighted Distribution Condition with constant ε wdc if for any nonzero vectors x 1, x 2 ∈ R k, where we have θ x,z = ∠(x, z) and Mx ↔ẑ is the matrix that transformsx toẑ,ẑ tox, and ϑ to 0 for any ϑ ∈ span({x, z}) ⊥. Here we definex:= x/ x 2,ẑ:= z/ z 2. Before presenting Theorem 2.2 and 2.3, we define the directional derivative along non-zero z as where {x N} is a sequence such that x N → x and L(x) is differentiable at any x N. Such sequence must exist due to the piecewise linearity of G(x). 
For any x such that L(x) is differentiable, the gradient of L(x) is can be easily computed as Next, we will present Theorem 2.2 to show that under certain conditions, local minimum can only lie in small neighborhoods of two points x 0 and its negative multiple −ρ n x 0. Theorem 2.2. Suppose that G is a ReLU network with W i satisfying WDC with error ε wdc for all i = 1,..., n where n > 1. With probability 1 − c 1 exp(−u), for any nonzero x 0 satisfying x 0 2 ≤ R(1/2 + ε wdc) −n/2, if we set 88πn wdc, for any nonzero x, set v x = lim x N →x ∇L(x N) where {x N} is the sequence such that ∇L(x N) exists for all x N (and v x = ∇L(x) if L(x) is differentiable at x), then, there exists a constant ρ n ≤ 1 such that the directional derivative satisfies: where ρ n = n−1 i=0 Note that in the above theorem, case 1 indicates that the when the magnitude of the true representation x 0 2 2 is larger than the accuracy level ε wdc, the global minimum lies in small neighborhoods around x 0 and its scalar multiple −ρ n x 0, while for any point outside the neighborhoods of x 0 and −ρ n x 0, one can always find a direction with a negative directional derivative. Note that x = 0 is a local maximum due to D w L < 0 along any non-zero directions w. One the other hand, case 2 implies that when x 0 2 2 is smaller than ε wdc, the global minimum lies in the neighborhood around 0 (and thus around x 0). Moreover, in the following theorem, we further pin down the global minimum of the loss function for case 1. Theorem 2.3. Suppose that G is a ReLU network with W i satisfying WDC with error ε wdc for all i = 1,..., n where n > 1. Assume that c 1 n 3 ε 1/4 wdc ≤ 1, and x 0 is any nonzero vector satisfying x 0 2 ≤ R(1/2 + ε wdc) −n/2. Then, with probability 1 − 2c 4 exp(−u), for any x 0 such that x 0 2 2 ≥ 2 n ε wdc, setting λ ≥ 4C a,ξ,R · log(64C ψ,ξ,R · ε −1 wdc), and m ≥ c 2 a 2 ψ1 λ 2 log 2 (λm)(kn log(ed) + k log(2R) + k log m + u)/ε 2 wdc, we have L(x) < L(z), ∀x ∈ B(φ n x 0, c 3 n −5 x 0 2), and ∀z ∈ B(−ζ n x 0, c 3 n −5 x 0 2), where c 1, c 2, c 3, c 4 are absolute constants, φ n, ζ n are any scalars in [ρ n, 1]. Particularly, we have c 3 n −5 < min n≥2 ρ n such that the radius c 3 n −5 x 0 2 < ρ n x 0 2 for any n. Remark 2.3. The significance of Theorem 2.3 are two folds: first, it shows that the value of ERM is always smaller around x 0 compared to its negative multiple −ρ n x 0; second, when the accuracy level ε wdc is small, one can guarantee that the global minimum of L(x) stays around x 0. In particular, by Theorem 2.2 and 2.3, our theory implies that if ε wdc ≤ cn −76 for some constant c, then the global minimum of the proposed ERM is in B(φ n x 0, c 3 n −5 x 0 2). Since we do not focus on optimizing the order of n here, further improvement of such a dependency will be one of our future works. | We provide statistical and computational analysis of one-bit compressed sensing problem with a generative prior. | 1,219 | scitldr |
We introduce an unsupervised structure learning algorithm for deep, feed-forward, neural networks. We propose a new interpretation for depth and inter-layer connectivity where a hierarchy of independencies in the input distribution is encoded in the network structure. This in structures allowing neurons to connect to neurons in any deeper layer skipping intermediate layers. Moreover, neurons in deeper layers encode low-order (small condition sets) independencies and have a wide scope of the input, whereas neurons in the first layers encode higher-order (larger condition sets) independencies and have a narrower scope. Thus, the depth of the network is automatically determined---equal to the maximal order of independence in the input distribution, which is the recursion-depth of the algorithm. The proposed algorithm constructs two main graphical models: 1) a generative latent graph (a deep belief network) learned from data and 2) a deep discriminative graph constructed from the generative latent graph. We prove that conditional dependencies between the nodes in the learned generative latent graph are preserved in the class-conditional discriminative graph. Finally, a deep neural network structure is constructed based on the discriminative graph. We demonstrate on image classification benchmarks that the algorithm replaces the deepest layers (convolutional and dense layers) of common convolutional networks, achieving high classification accuracy, while constructing significantly smaller structures. The proposed structure learning algorithm requires a small computational cost and runs efficiently on a standard desktop CPU. Over the last decade, deep neural networks have proven their effectiveness in solving many challenging problems in various domains such as speech recognition BID17, computer vision BID28 BID16 BID46 and machine translation BID9. As compute resources became more available, large scale models having millions of parameters could be trained on massive volumes of data, to achieve state-of-the-art solutions for these high dimensionality problems. Building these models requires various design choices such as network topology, cost function, optimization technique, and the configuration of related hyper-parameters. In this paper, we focus on the design of network topology-structure learning. Generally, exploration of this design space is a time consuming iterative process that requires close supervision by a human expert. Many studies provide guidelines for design choices such as network depth BID46, layer width BID55, building blocks, and connectivity BID20 BID23. Based on these guidelines, these studies propose several meta-architectures, trained on huge volumes of data. These were applied to other tasks by leveraging the representational power of their convolutional layers and fine-tuning their deepest layers for the task at hand BID21 BID33. However, these meta-architecture may be unnecessarily large and require large computational power and memory for training and inference. The problem of model structure learning has been widely researched for many years in the probabilistic graphical models domain. Specifically, Bayesian networks for density estimation and causal discovery BID42 BID50. Two main approaches were studied: score-based (search-and-score) and constraint-based. Score-based approaches combine a scoring function, such as BDe BID10 and BIC BID44, with a strategy for searching through the space of structures, such as greedy equivalence search BID6. 
BID1 introduced an algorithm for sampling deep belief networks (generative model) and demonstrated its applicability to high-dimensional image datasets. Constraint-based approaches BID42 BID50 find the optimal structures in the large sample limit by testing conditional independence (CI) between pairs of variables. They are generally faster than score-based approaches BID54 ) and have a well-defined stopping criterion (e.g., maximal order of conditional independence). However, these methods are sensitive to errors in the independence tests, especially in the case of high-order conditional-independence tests and small training sets. Motivated by these methods, we propose a new interpretation for depth and inter-layer connectivity in deep neural networks. We derive a structure learning algorithm such that a hierarchy of independencies in the input distribution is encoded in the network structure, where the first layers encode higher-order independencies than deeper layers. Thus, the number of layers is automatically determined. Moreover, a neuron in a layer is allowed to connect to neurons in deeper layers skipping intermediate layers. An example of a learned structure, for MNIST, is given in Figure 1.We describe our recursive algorithm in two steps. In Section 2 we describe a base case-a singlelayer structure learning. In Section 3 we describe multi-layer structure learning by applying the key concepts of the base case, recursively (proofs are provided in Appendix A). In Section 4 we discuss related work. We provide experimental in Section 5, and conclude in Section 6. DISPLAYFORM0 a set of latent variables, and Y a class variable. Our algorithm constructs three graphical models and an auxiliary graph. Each variable is represented by a single node and a single edge may connect two distinct nodes. Graph G is a generative DAG defined over the observed and latent variables X ∪ H. Graph G Inv is called a stochastic inverse of G. Graph G D is a discriminative model defined over the observed, latent, and class variables X ∪ H ∪ Y. An auxiliary graph G X is defined over X (a CPDAG; an equivalence class of a Bayesian network) and is generated and maintained as an internal state of the algorithm. The parents set of a node X in G is denoted P a(X; G). The order of an independence relation is defined to be the condition set size. For example, if X 1 and X 2 are independent given X 3 and X 4, denoted X 1 ⊥ ⊥ X 2 |{X 3, X 4}, then the independence order is two. Figure 1: An example of a structure learned by our algorithm (classifying MNIST digits). Neurons in a layer may connect to neurons in any deeper layer. Depth is determined automatically. Each gather layer selects a subset of the input, where each input variable is gathered only once. A neural route, starting with a gather layer, passes through densely connected layers where it may split (copy) and merge (concatenate) with other routes in correspondence with the hierarchy of independencies identified by the algorithm. All routes merge into the final output layer (e.g., a softmax layer). We start by describing the key concepts of our approach using a simple scenario: learning the connectivity of a single-layer neural network. Assume the input joint distribution p(X) complies with the following property. Assumption 1. The joint distribution p(X) is faithful to a DAG G over observed X and latent nodes H, where for all X ∈ X and H ∈ H, P a(X; G) ⊆ H and P a(H; G) ⊆ H\H. 
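The primitive repeatedly queried when learning G_X is an n-th-order conditional-independence test between observed variables. Below is a minimal version of the partial-correlation test used later in the experiments; the regression-residual formulation and the fixed threshold are standard illustrative choices rather than the paper's exact procedure.

```python
import numpy as np

def partial_correlation(data, i, j, cond):
    """Sample partial correlation of X_i and X_j given the variables in cond,
    computed from residuals of least-squares regressions on the conditioning set.
    data: (num_samples, num_variables) array; len(cond) is the order n of the test."""
    if not cond:
        return float(np.corrcoef(data[:, i], data[:, j])[0, 1])
    Z = np.column_stack([data[:, list(cond)], np.ones(len(data))])   # conditioning set + intercept
    r_i = data[:, i] - Z @ np.linalg.lstsq(Z, data[:, i], rcond=None)[0]
    r_j = data[:, j] - Z @ np.linalg.lstsq(Z, data[:, j], rcond=None)[0]
    return float(np.corrcoef(r_i, r_j)[0, 1])

def is_independent(data, i, j, cond, threshold=0.02):
    """Declare X_i independent of X_j given cond when the partial correlation is small."""
    return abs(partial_correlation(data, i, j, cond)) < threshold
```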
DISPLAYFORM0 Note that the generative graphical model G can be described as a layered deep belief network where parents of a node in layer m can be in any deeper layer, indexes greater than m, and not restricted to the next layer m + 1. This differs from the common definition of deep belief networks BID22 BID1 where the parents are restricted to layer m + 1.It is desired to learn an efficient graph G having small sets of parents and a simple factorization of p(H) while maintaining high expressive power. We first construct an auxiliary graph, a CPDAG BID50, G X over X (an equivalence class of a fully visible Bayesian network) encoding only marginal independencies 1 (empty condition sets) and then construct G such that it can mimic BID42. That is, preserving all conditional dependencies of X in G X. DISPLAYFORM1 The simplest connected DAG that encodes statistical independence is the v-structure, a structure with three nodes X 1 → X 3 ← X 2 in which X 1 and X 2 are marginally independent X 1 ⊥ ⊥ X 2 and conditionally dependent X 1 ⊥ ⊥X 2 |X 3. In graphs encoding only marginal independencies, dependent nodes form a clique. We follow the procedure described by BID54 and decompose X into autonomous sets (complying with the Markov property) where one set, denoted X D (descendants), is the common child of all other sets, denoted X A1,..., X AK (ancestor sets). We select X D to be the set of nodes that have the lowest topological order in G X. Then, by removing X D from G X (temporarily for this step), the ing K disjoint sets of nodes (corresponding to K disjoint substructures) form the K ancestor sets DISPLAYFORM2. See an example in FIG1. Next, G is initialized to an empty graph over X. Then, for each ancestor set X Ai a latent variable H i is introduced and assigned to be a common parent of the pair (X Ai, X D). Thus, DISPLAYFORM3 Note that the parents of two ancestor sets are distinct, whereas the parents set of the descendant set is composed of all the latent variables. In the auxiliary graph G X, for each of the ing v-structures (X Ai → X D ← X Aj), a link between a parent and a child can be replaced by a common latent parent without introducing new independencies. For example, in Algorithm 1 summarizes the procedure of constructing G having a single latent layer. Note that we do not claim to identify the presence of confounders and their inter-relations as in BID14; BID45; BID2. Instead, we augment a fully observed Bayesian network with latent variables, while preserving conditional dependence. [a] DISPLAYFORM4 A stochastic inverse generated by the algorithm presented by.[c] A stochastic inverse generated by our method where the graph is a projection of a latent structure. A dependency induced by a latent Q is described using a bi-directional edge DISPLAYFORM5 having a class node Y that provides an explaining away relation for H A ↔ H B. That is, the latent Q is replaced by an observed common child Y. It is important to note that G represents a generative distribution of X and is constructed in an unsupervised manner (class variable Y is ignored). Hence, we construct G Inv, a graphical model that preserves all conditional dependencies in G but has a different node ordering in which the observed variables, X, have the highest topological order (parentless)-a stochastic inverse of G. 
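A rough sketch of the bookkeeping in Algorithm 1 and of the stochastic-inverse construction described above is given below. Taking the sink nodes (out-degree 0) of G_X as the set of "lowest topological order" is a simplification of the CPDAG handling in the paper, and latent names of the form "H0", "H1", … are purely illustrative.

```python
import networkx as nx

def split_autonomous(g_x):
    """Descendant set X_D = sink nodes; ancestor sets = connected components of
    G_X after temporarily removing X_D (a simplified stand-in for the paper's
    CPDAG-based selection)."""
    x_d = {v for v in g_x.nodes if g_x.out_degree(v) == 0}
    rest = g_x.subgraph(set(g_x.nodes) - x_d).to_undirected()
    return x_d, [set(c) for c in nx.connected_components(rest)]

def single_latent_layer(g_x):
    """Algorithm 1: one latent H_i per ancestor set X_Ai, set as a common
    parent of X_Ai and of the shared descendant set X_D."""
    x_d, ancestor_sets = split_autonomous(g_x)
    g = nx.DiGraph()
    g.add_nodes_from(g_x.nodes)
    for i, x_ai in enumerate(ancestor_sets):
        for child in x_ai | x_d:
            g.add_edge(f"H{i}", child)
    return g

def stochastic_inverse(g):
    """Invert all edges and connect latents that share a child in G with a
    bi-directional edge (represented here as two opposing directed edges)."""
    g_inv = g.reverse(copy=True)
    latents = [v for v in g.nodes if str(v).startswith("H")]
    for a in latents:
        for b in latents:
            if a != b and set(g.successors(a)) & set(g.successors(b)):
                g_inv.add_edge(a, b)
                g_inv.add_edge(b, a)
    return g_inv
```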
Note that conditional dependencies among X are not required to be preserved in the stochastic inverse as these are treated (simultaneously) as observed variables (highest topological order).; BID41 presented a heuristic algorithm for constructing such stochastic inverses where the structure is a DAG (an example is given in Figure 3 -[b] ). However, these DAGs, though preserving all conditional dependencies, may omit many independencies and add new edges between layers. We avoid limiting G Inv to a DAG and instead limit it to be a projection of another latent structure BID42. That is, we assume the presence of additional hidden variables Q that are not in G Inv but induce dependency 2 among H. For clarity, we omit these variables from the graph and use bi-directional edges to represent the dependency induced by them. An example is given in Figure 3 -[c] where a bi-directional edge represents the effect of some variable Q ∈ Q on H A and H B. We construct G Inv in two steps:1. Invert all G edges (invert inter-layer connectivity).2. Connect each pair of latent variables, sharing a common child in G, with a bi-directional edge. This simple procedure ensures G G Inv over X ∪ H while maintaining the exact same number of edges between the layers (Proposition 1, Appendix A). XD ←− nodes having the lowest topological order identify autonomous sets DISPLAYFORM0 set each Hi to be a parent of {XA 1 ∪ XD} connect 12 return G Recall that G encodes the generative distribution of X and G Inv is the stochastic inverse. We further construct a discriminative graph G D by replacing bi-directional dependency relations in G Inv, induced by Q, with explaining-away relations by adding the observed class variable Y. Node Y is set in G D to be the common child of the leaves in G Inv (latents introduced after testing marginal independencies) (see an example in Figure 3 - [d] ). This preserves the conditional dependency relations of G Inv. That is, G D can mimic G Inv over X and H given Y (Proposition 2, Appendix A). It is interesting to note that the generative and discriminative graphs share the exact same inter-layer connectivity (inverted edge-directions). Moreover, introducing node Y provides an "explaining away" relation between latents, uniquely for the classification task at hand. We construct a neural network based on the connectivity in G D. Sigmoid belief networks BID38 have been shown to be powerful neural network density estimators BID29 BID15. In these networks, conditional probabilities are defined as logistic regressors. Similarly, for G D we may define for each latent variable H ∈ H, DISPLAYFORM0 where sigm(x) = 1/(1 + exp(−x)), X = P a(H ; G D), and (W, b) are the parameters of the neural network. BID37 proposed replacing each binary stochastic node H by an infinite number of copies having the same weights but with decreasing bias offsets by one. They showed that this infinite set can be approximated by DISPLAYFORM1 where v = W X + b. They further approximate this function by max(0, v +) where is a zerocentered Gaussian noise. Following these approximations, they provide an approximate probabilistic interpretation for the ReLU function, max(0, v). As demonstrated by BID25 and BID37, these units are able to learn better features for object classification in images. In order to further increase the representational power, we represent each H by a set of neurons having ReLU activation functions. 
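The approximation chain referenced above (BID37) is, in its usual form, Σ_{i≥1} sigm(v − i + 0.5) ≈ log(1 + eᵛ) ≈ max(0, v); a quick numerical check of both steps:

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

v = np.linspace(-4.0, 6.0, 101)
stepped = sum(sigm(v - i + 0.5) for i in range(1, 60))  # truncation of the infinite copies
softplus = np.log1p(np.exp(v))
relu = np.maximum(v, 0.0)
print(np.max(np.abs(stepped - softplus)))   # about 0.01 over this range
print(np.max(np.abs(softplus - relu)))      # log(2), attained at v = 0
```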
That is, each latent variable H in G D is represented in the neural network by a dense (fully-connected) layer. Finally, the class node Y is represented by a softmax layer. We now extend the method of learning the connectivity of a single layer into a method of learning multi-layered structures. The key idea is to recursively introduce a new and deeper latent layer by testing n-th order conditional independence (n is the condition set size) and connect it to latent layers created by previous recursive calls that tested conditional independence of order n + 1. The method is described in Algorithm 2. It is important to note that conditional independence is tested only between input variables X and condition sets do not include latent variables. Conditioning on latent variables or testing independence between them is not required as the algorithm adds these latent variables in a specific manner, preserving conditional dependencies between the input variables. Algorithm 2: Recursive Latent Structure Learning (multi-layer)1 RecurLatStruct (GX, X, Xex, n) Input: an initial DAG GX over observed X & exogenous nodes Xex and a desired resolution n. Output: G, a latent structure over X and H 2 if the maximal indegree of GX (X) is below n + 1 then exit condition DISPLAYFORM0 to be a parent of {HA The algorithm maintains and recursively updates an auxiliary graph G X (a CPDAG) over X and utilizes it to construct G. BID54 introduced an efficient algorithm (RAI) for constructing a CPDAG over X by a recursive application of conditional independence tests with increasing condition set sizes (n). Our algorithm is based on this framework for updating the auxiliary graph G X (Algorithm 2, lines 5 and 6).The algorithm starts with n = 0, G X a complete graph, and a set of exogenous nodes X ex = ∅. The set X ex is exogenous to G X and consists of parents of X.The function IncreaseResolution (Algorithm 2-line 5) disconnects (in G X) conditionally independent variables in two steps. First, it tests dependency between X ex and X, i.e., X ⊥ ⊥ X |S for every connected pair X ∈ X and X ∈ X ex given a condition set S ⊂ {X ex ∪ X} of size n. Next, it tests dependency within X, i.e., X i ⊥ ⊥ X j |S for every connected pair X i, X j ∈ X given a condition set S ⊂ {X ex ∪ X} of size n. After removing the corresponding edges, the remaining edges are directed by applying two rules BID42 BID50. First, v-structures are identified and directed. Then, edges are continually directed, by avoiding the creation of new v-structures and directed cycles, until no more edges can be directed. Following the terminology of BID54, we say that this function increases the graph d-separation resolution from n − 1 to n. The function SplitAutonomous (Algorithm 2-line 6) identifies autonomous sets in a graph in two steps, as described in Algorithm 1 lines 7 and 8. An autonomous set in G X includes all its nodes' parents (complying with the Markov property) and therefore a corresponding latent structure can be constructed independently using a recursive call. Thus, the algorithm is recursively and independently called for the ancestor sets (Algorithm 2 lines 7-8), and then called for the descendant set while treating the ancestor sets as exogenous (Algorithm 2 line 9).[a] Each recursive call returns a latent structure for each autonomous set. Recall that each latent structure encodes a generative distribution over the observed variables where layer H (n+1), the last added layer (parentless nodes), is a representation of the input X ⊂ X. 
By considering only layer H (n+1) of each latent structure, we have the same simple scenario discussed in Section 2-learning the connectivity between H (n), a new latent layer, and H (n+1), treated as an "input" layer. Thus, latent variables are introduced as parents of the H (n+1) layers, as described in Algorithm 2 lines 11-13. A simplified example is given in Figure 4.Next, a stochastic inverse G Inv is constructed as described in Section 2-all the edge directions are inverted and bi-directional edges are added between every pair of latents sharing a common child in G. An example graph G and a corresponding stochastic inverse G Inv are given in Figure 5. A discriminative structure G D is then constructed by removing all the bi-directional edges and adding the class node Y as a common child of layer H, the last latent layer that is added (Figure 5-[c] ). Finally, a neural network is constructed based on the connectivity of G D. That is, each latent node, H ∈ H (n), is replaced by a set of neurons, and each edge between two latents, H ∈ H (n) and DISPLAYFORM1, is replaced by a bipartite graph connecting the neurons corresponding to H and H. Recent studies have focused on automating the exploration of the design space, posing it as a hyperparameter optimization problem and proposing various approaches to solve it. BID34 learns the topology of an RNN network introducing structural parameters into the model and optimize them along with the model weights by the common gradient descent methods. BID47 takes a similar approach incorporating the structure learning into the parameter learning scheme, gradually growing the network up to a maximum size. A common approach is to define the design space in a way that enables a feasible exploration process and design an effective method for exploring it. BID56 (NAS) first define a set of hyper-parameters characterizing a layer (number of filters, kernel size, stride). Then they use a controller-RNN for finding the optimal sequence of layer configurations for a "trainee network". This is done using policy gradients (REINFORCE) for optimizing the objective function that is based on the accuracy achieved by the "trainee" on a validation set. Although this work demonstrates capabilities to solve large-scale problems (Imagenet), it comes with huge computational cost. In a following work, BID57 address the same problem but apply a hierarchical approach. They use NAS to design network modules on a small-scale dataset (CIFAR-10) and transfer this knowledge to a large-scale problem by learning the optimal topology composed of these modules. BID3 use reinforcement learning as well and apply Q-learning with epsilon-greedy exploration strategy and experience replay. BID39 propose a language that allows a human expert to compactly represent a complex search-space over architectures and hyperparameters as a tree and then use methods such as MCTS or SMBO to traverse this tree. Smithson et al. FORMULA1 present a multi objective design space exploration, taking into account not only the classification accuracy but also the computational cost. In order to reduce the cost involved in evaluating the network's accuracy, they train a Response Surface Model that predicts the accuracy at much lower cost, reducing the number of candidates that go through actual validation accuracy evaluation. Another common approach for architecture search is based on evolutionary strategies to define and search the design space. 
BID43 BID35 use evolutionary algorithm to evolve an initial model or blueprint based on its validation performance. Common to all these recent studies is the fact that structure learning is done in a supervised manner, eventually learning a discriminative model. Moreoever, these approaches require huge compute resources, rendering the solution unfeasible for most applications given limited compute and time resources. We evaluate the quality of the learned structure in two experiments:• Classification accuracy as a function of network depth and size for a structure learned directly from MNIST pixels.• Classification accuracy as a function of network size on a range of benchmarks and compared to common topologies. All the experiments were repeated five times where average and standard deviation of the classification accuracy were recorded. In all of our experiments, we used a ReLU function for activation, ADAM BID26 for optimization, and applied batch normalization BID24 followed by dropout BID51 to all the dense layers. All optimization hyperparameters that were tuned for the vanilla topologies were also used, without additional tuning, for the learned structures. For the learned structures, all layers were allocated an equal number of neurons. Threshold for independence tests, and the number of neurons-per-layer were selected by using a validation set. Only test-set accuracy is reported. Our structure learning algorithm was implemented using the Bayesian network toolbox BID36 and Matlab. We used Torch7 BID8 and Keras BID7 with the TensorFlow BID0 back-end for optimizing the parameters of both the vanilla and learned structures. We analyze the accuracy of structures learned by our algorithm as a function of the number of layers and parameters. Although network depth is automatically determined by the algorithm, it is implicitly controlled by the threshold used to test conditional independence (partial-correlation test in our experiments). For example, a high threshold may cause detection of many independencies leading to early termination of the algorithm and a shallow network (a low threshold has the opposite effect). Thus, four different networks having 2, 3, 4, and 5 layers, using four different thresholds, are learned for MNIST. We also select three configurations of network sizes: a baseline (normalized to 1.00), and two configurations in which the number of parameters is 0.5, and 0.375 of the baseline network (equal number of neurons are allocated for each layer).Classification accuracies are summarized in TAB0. When the number of neurons-per-layers is large enough (100%) a 3-layer network achieves the highest classification accuracy of 99.07% (standard deviation is 0.01) where a 2-layer dense network has only a slight degradation in accuracy, 99.04%. For comparison, networks with 2 and 3 fully connected layers (structure is not learned) with similar number of parameters achieve 98.4% and 98.75%, respectively. This demonstrates the efficiency of our algorithm when learning a structure having a small number of layers. In addition, for a smaller neuron allocation (50%), deeper structures learned by our algorithm have higher accuracy than shallower ones. However, a decrease in the neurons-per-layer allocation has a greater impact on accuracy for deeper structures. MNIST images as a function of network depth and number of parameters (normalized). 
For comparison, when a structure is not learned, networks with 2 and 3 dense layers, achieve 98.4% and 98.75% accuracy, respectively (having the same size as learned structures at configuration "100%"). We evaluate the quality of learned structures using five image classification benchmarks. We compare the learned structures to common topologies (and simpler hand-crafted structures), which we call "vanilla topologies", with respect to network size and classification accuracy. The benchmarks and vanilla topologies are described in Table 2. In preliminary experiments we found that, for SVHN and ImageNet, a small subset of the training data is sufficient for learning the structure (larger training set did not improve classification accuracy). As a , for SVHN only the basic training data is used (without the extra data), i.e., 13% of the available training data, and for ImageNet 5% of the training data is used. Parameters were optimized using all of the training data. Table 2: Benchmarks and vanilla topologies. MNIST-Man and SVHN-Man topologies were manually created by us. MNIST-Man has two convolutional layer (32 and 64 filters each) and one dense layer with 128 neurons. SVHN-Man was created as a small network reference having reasonable accuracy compared to Maxout-NiN. In the first row we indicate that in one experiment a structure for MNIST was learned from the pixels and feature extracting convolutional layers were not used. Convolutional layers are powerful feature extractors for images exploiting domain knowledge, such as spatial smoothness, translational invariance, and symmetry. We therefore evaluate our algorithm by using the first convolutional layers of the vanilla topologies as "feature extractors" (mostly below 50% of the vanilla network size) and learning a deep structure from their output. That is, the deepest layers of the vanilla network (mostly over 50% of the network size) is removed and replaced by a structure learned by our algorithm in an unsupervised manner. Finally, a softmax layer is added and the entire network parameters are optimized. First, we demonstrate the effect of replacing a different amount of the deepest layers and the ability of the learned structure to replace feature extraction layers. Table 3 describes classification accuracy achieved by replacing a different amount of the deepest layers in VGG-16. For example, column "conv.10" represents learning a structure using the activations of conv.10 layer. Accuracy and the normalized number of network parameters are reported for the overall network, e.g., up to conv.10 + the learned structure. Column "vanilla" is the accuracy achieved by the VGG-16 network, after training under the exact same setting (a setting we found to maximize a validation-set accuracy for the vanilla topologies). Table 3: Classification accuracy (%) and overall network size (normalized number of parameters). VGG-16 is the "vanilla" topology. For both, CIFAR 10/100 benchmarks, the learned structure achieves the highest accuracy by replacing all the layers that are deeper than layer conv.10. Moreover, accuracy is maintained when replacing the layers deeper than layer conv.7.One interesting phenomenon to note is that the highest accuracy is achieved at conv. 10 rather than at the "classifier" (the last dense layer). This might imply that although convolutional layers are useful at extracting features directly from images, they might be redundant for deeper layers. 
By using our structure learning algorithm to learn the deeper layers, accuracy of the overall structure increases with the benefit of having a compact network. An accuracy, similar to that of "vanilla" VGG-16, is achieved with a structure having 85% less total parameters (conv. 7) than the vanilla network, where the learned structure is over 50X smaller than the replaced part. Next, we evaluate the accuracy of the learned structure as a function of the number of parameters and compare it to a densely connected network (fully connected layers) having the same depth and size. For SVHN, we used the Batch Normalized Maxout Network in Network topology BID4 and removed the deepest layers starting from the output of the second NiN block (MMLP-2-2). For CIFAR-10, we used the VGG-16 and removed the deepest layers starting from the output of conv.10 layer. For MNIST, a structure was learned directly from pixels. Results are depicted in FIG6. It is evident that accuracy of the learned structures is significantly higher (error bars represent 2 standard deviations) than a set of fully connected layers, especially in cases where the network is limited to a small number of parameters.[a] Finally, in Table 4 we provide a summary of network sizes and classification accuracies, achieved by replacing the deepest layers of common topologies (vanilla) with a learned structure. In the first row, a structure is learned directly from images; therefore, it does not have a "vanilla" topology as reference (a network with 3 fully-connected layers having similar size achieves 98.75% accuracy). In all the cases, the size of the learned structure is significantly smaller than the vanilla topology, and generally has an increase in accuracy. Comparison to other methods. Our structure learning algorithm runs efficiently on a standard desktop CPU, while providing structures with competitive classification accuracies and network sizes. For example, the lowest classification error rate achieved by our unsupervised algorithm for CIFAR 10 is 4.58% with a network of size 6M (WRN-40-4 row in Table 4). For comparison, the NAS algorithm BID56 achieves error rates of 5.5% and 4.47% for networks of sizes 4.2M and 7.1M, respectively, and requires optimizing thousands of networks using hundreds of GPUs. For AlexNet network, recent methods for reducing the size of a pre-trained network (pruning while maintaining classification accuracy) achieve 5× and 9× BID18 BID34 Table 4: A summary of network sizes and classification accuracies (and standard deviations), achieved by replacing the deepest layers of common topologies (vanilla) with a learned structure. The number of parameters are reported for "feature extraction" (first layers of the vanilla topology), removed section (the deepest layers of the vanilla topology), and the learned structure that replaced the removed part. The sum of parameters in the "feature extraction" and removed parts equals to the vanilla topology size. The first row corresponds to learning a structure directly from image pixels. We presented a principled approach for learning the structure of deep neural networks. Our proposed algorithm learns in an unsupervised manner and requires small computational cost. The ing structures encode a hierarchy of independencies in the input distribution, where a node in one layer may connect another node in any deeper layer, and depth is determined automatically. 
We demonstrated that our algorithm learns small structures and maintains high classification accuracy on common image classification benchmarks. We also demonstrated that while convolutional layers are very useful for exploiting domain knowledge such as spatial smoothness, translational invariance, and symmetry, they are mostly outperformed by a learned structure in the deeper layers. Moreover, whereas reusing common topologies (meta-architectures) across a variety of classification tasks is computationally inefficient, we expect our approach to learn a smaller and more accurate network tailored to each classification task. As only unlabeled data is required for learning the structure, we expect our approach to be practical for many domains beyond image classification, such as knowledge discovery, and we plan to explore the interpretability of the learned structures. | A principled approach for structure learning of deep neural networks with a new interpretation for depth and inter-layer connectivity. | 1,220 | scitldr |
L1 and L2 regularizers are critical tools in machine learning due to their ability to simplify solutions. However, imposing strong L1 or L2 regularization with gradient descent method easily fails, and this limits the generalization ability of the underlying neural networks. To understand this phenomenon, we investigate how and why training fails for strong regularization. Specifically, we examine how gradients change over time for different regularization strengths and provide an analysis why the gradients diminish so fast. We find that there exists a tolerance level of regularization strength, where the learning completely fails if the regularization strength goes beyond it. We propose a simple but novel method, Delayed Strong Regularization, in order to moderate the tolerance level. Experiment show that our proposed approach indeed achieves strong regularization for both L1 and L2 regularizers and improves both accuracy and sparsity on public data sets. Our source code is published. Regularization has been very common for machine learning to prevent over-fitting and to obtain sparse solutions. Deep neural networks (DNNs), which have shown huge success in many tasks such as computer vision BID9 BID15 BID5 and speech recognition, often contain a number of parameters in multiple layers with non-linear activation functions, in order to gain enough expressive power. However, DNNs with many parameters are often prone to over-fitting, so the need for regularization has been emphasized. While new regularization techniques such as dropout BID16 and pruning BID2 have been proposed to solve the problem, the traditional regularization techniques using L1 or L2 norms have cooperated with them to further improve the performance significantly. L1 regularization, often called Lasso BID17, obtains sparse solutions so that the required memory and power consumption are reduced while keeping reasonable accuracy. On the other hand, L2 regularization smooths the parameter distribution and reduces the magnitude of parameters, so the ing solution is simple (i.e., less prone to over-fitting) and effective. Indeed, our empirical show that applying strong L2 regularization to the deep neural networks that already has dropout layers can reduce the error rate by up to 24% on a public data set. Strong regularization is especially desired when the model contains too many parameters for the given amount of training data. This is often the case for deep learning tasks in practice because DNNs often contain millions of parameters while labeled training data set is limited and expensive. However, imposing strong L1 or L2 regularization on DNNs is difficult for gradient descent method due to the vanishing gradient problem. If we impose too strong regularization, the gradient from regularization becomes dominant, and DNNs stop learning. In this paper, we first study the interesting phenomenon that strong regularization fails in learning. We also provide an analysis why the gradients diminish so quickly that learning completely fails. Then, we propose a simple yet effective solution, Delayed Strong Regularization, which carries a time-dependent schedule of regularization strength. We find that we can overcome the failure in learning by waiting for the model to reach an "active learning" phase, where the gradients' magnitudes are significant, and then enforcing strong regularization. Delayed Strong Regularization enables us to obtain the superior performance that is otherwise hidden by learning failure in deep networks. 
The proposed approach is general and does not require any additional computation. The experimental results indicate that the proposed approach indeed achieves strong regularization, consistently yielding higher accuracy and higher compression rates that could not be achieved otherwise. 2.1 Let us denote a generic DNN by y = f(x; w), where x ∈ R^d is an input vector, w ∈ R^n is a flattened vector of all parameters in the network f, and y ∈ R^c is the output vector obtained by feed-forwarding x through the multiple layers of f. The network f is trained by finding the set of parameters w that minimizes the cost function over the training data D = {(x_1, y_1), ..., (x_N, y_N)}:

J(w) = Σ_{i=1}^{N} L(f(x_i; w), y_i) + λ Ω(w),

where L is the loss function, usually the cross-entropy loss for classification tasks. Here, the regularization term λΩ(w) is added to simplify the solution, and λ is set to zero for the non-regularized cost function. A higher value of λ means that stronger regularization is imposed. The most commonly used regularization function is the squared L2 norm, Ω(w) = ||w||_2^2, also called weight decay in the deep learning literature. The L2 regularizer reduces the magnitude of the parameters w, and the resulting simpler solution is less prone to over-fitting. On the other hand, the L1 regularizer Ω(w) = ||w||_1 is often employed to induce sparsity in the model (i.e., to drive a portion of w to zero). Sparse solutions are often preferred in deep learning to reduce computation time and memory consumption, since DNNs typically require heavy computation and large memory. With the gradient descent method, each model parameter w_i^(t) at time step t is updated as

w_i^(t+1) = w_i^(t) − α ( ∂L/∂w_i + λ ∂Ω(w)/∂w_i ), evaluated at w = w^(t),

where α is the learning rate. As the L1 norm is not differentiable at 0, the formula is undefined when w_i^(t) = 0, but in practice the subgradient 0 is often used; see Section 2.3 for details. From the formula, we can see that the L2 regularizer continuously shrinks a parameter in proportion to its magnitude, while the L1 regularizer shrinks it by a constant amount. For both regularizers, strong regularization thus means greatly reducing the magnitude of the parameters. Strong regularization is especially useful for deep learning because DNNs often contain a large number of parameters while the training data is limited in practice. However, we have observed a phenomenon where learning suddenly fails when strong regularization is imposed with the gradient descent method, the most commonly used solver for deep learning. An example of the phenomenon is depicted in Figure 1, where the architectures VGG-16 BID15 and AlexNet BID9 are trained on the CIFAR-100 data set BID8. As shown, accuracy increases as we enforce more regularization. However, it suddenly drops to 1.0% after enforcing slightly more regularization, which means that the model entirely fails to learn. The depicted training loss also indicates that the model indeed learns faster with stronger regularization (λ = 1 × 10^−3), but the training loss does not improve at all when even stronger regularization is imposed (λ = 2 × 10^−3). In order to look at this phenomenon in more detail, we show how the gradients and their proportions change over time in Figure 2. As depicted in Figure 2a, the model with moderate L2 regularization (λ = 1 × 10^−3) follows a path with a relatively steep slope during the first 150 epochs and then converges with a gentle slope.
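As a concrete reference for the update formula above, the following is a minimal PyTorch sketch of one plain-SGD step with an explicit L1 or L2 penalty gradient; the function name, learning rate, and λ value are illustrative and not taken from the paper.

```python
import torch

def regularized_sgd_step(params, loss, lr=0.05, lam=1e-3, reg="l2"):
    """One plain-SGD step with an explicit L1 or L2 penalty gradient.

    `params` is a list of torch.nn.Parameter; `loss` is the data loss
    (e.g., cross-entropy) already computed for the current mini-batch.
    """
    grads = torch.autograd.grad(loss, params)           # dL/dw for each tensor
    with torch.no_grad():
        for w, g in zip(params, grads):
            if reg == "l2":                              # d(||w||_2^2)/dw = 2w
                reg_grad = 2.0 * w
            else:                                        # subgradient of ||w||_1
                reg_grad = torch.sign(w)                 # equals 0 at w == 0
            w -= lr * (g + lam * reg_grad)
```

With reg="l2" this is the familiar weight-decay-style update (the factor of 2 comes from the squared norm); when λ is large, the shrinkage term dominates the data gradient, which is exactly the failure mode examined in Figure 2.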
A model with slightly stronger L2 regularization (λ = 2 × 10^−3), on the other hand, never finds a path with a steep slope, so it does not really get a chance to learn from the gradients. A close-up view of the first 20 epochs is depicted in Figure 2b. The models with moderate L1 and L2 regularization settle onto a good path within a couple of epochs. By following that path, these models keep the proportion of the gradients coming from L dominant over all gradients, especially during the first 150 epochs (Figure 2c). On the other hand, the models with slightly stronger regularization fail to follow such a path, and the gradients from L decrease exponentially (Figure 2b). Since the magnitude of the gradients from L decreases faster than that of the gradients from Ω, the proportion of the latter becomes dominant (Figure 2c), and this results in failure of learning. From this observation, we can see that there exists a tolerance level of regularization strength that decides the success or failure of the entire learning process. Why does the magnitude of the gradient from L decrease so fast? It is not difficult to see why the magnitude of ∂L/∂w_i decreases so fast when the regularization is strong. In deep neural networks, the gradients are dictated by back-propagation. It is well known that the gradients at a fully-connected layer l are given by

∂L/∂W^(l) = δ^(l) (a^(l−1))^T,

where a^(l−1) is the output of the neurons at the (l−1)-th layer and δ^(l) is the l-th-layer residual, which follows the recursive relation

δ^(l) = ((W^(l+1))^T δ^(l+1)) ⊙ a'^(l),

where ⊙ and a' denote element-wise multiplication and the derivative of the activation function, respectively. Unrolling the recursive relation, we obtain

δ^(l) = a'^(l) ⊙ (W^(l+1))^T ( a'^(l+1) ⊙ (W^(l+2))^T ( ··· (W^(L))^T δ^(L) ) ),

so δ^(l) contains the product of all weight matrices of the later layers. If the regularization is too strong, the weights are significantly suppressed, as shown in Figure 5b. Since the gradients are proportional to the product of the weights at later layers (whose magnitudes are typically much less than 1 under strong regularization), the gradients are suppressed even more strongly. In fact, the suppression is more severe than what we have deduced above: the factor a^(l−1) in the gradient expression leads to further suppression when the weights are very small, for the following reason. We use ReLU as the activation function, which can be written as

ReLU(x) = max(0, x) = x Θ(x),

where Θ(x) is the Heaviside step function. Using this, we can write (ignoring bias terms)

a^(l) = (W^(l) a^(l−1)) ⊙ Θ(W^(l) a^(l−1)).

Applying this relation recursively, we see that a^(l−1) is proportional to the product of the weights of the previous layers. Again, when the weights are suppressed by strong regularization, a^(l−1) is suppressed correspondingly. Putting everything together, we conclude that in the presence of strong regularization the gradients are suppressed far more than the weights. Strictly speaking, the derivations above are valid only for fully-connected layers; for convolutional layers the derivations are more complicated but similar, and our conclusion still holds. Normalization. Normalization techniques such as batch normalization BID7 and weight normalization BID13 are possible approaches to prevent the gradients from L from diminishing quickly. However, it has been shown that L2 regularization has no regularizing effect when combined with normalization and only influences the effective learning rate, which happens to result in good performance BID18. In other words, normalization techniques do not really simplify the solution, as the decrease in parameter magnitude is canceled by the normalization.
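The argument above can be illustrated numerically. The following sketch is entirely our own construction, not taken from the paper: it builds a small ReLU MLP, scales all weights down to mimic the shrinkage caused by strong regularization, and measures the first-layer gradient norm. The depth, width, and batch size are arbitrary choices.

```python
import torch
import torch.nn as nn

def first_layer_grad_norm(scale, depth=8, width=256):
    """Gradient norm at the first layer of a ReLU MLP whose weights have been
    multiplied by `scale`, mimicking weights shrunk by strong regularization."""
    torch.manual_seed(0)                       # identical network and data for every scale
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width), nn.ReLU()]
    net = nn.Sequential(*layers, nn.Linear(width, 10))
    with torch.no_grad():
        for p in net.parameters():
            p.mul_(scale)
    x = torch.randn(64, width)
    loss = nn.CrossEntropyLoss()(net(x), torch.randint(0, 10, (64,)))
    loss.backward()
    return net[0].weight.grad.norm().item()

for s in (1.0, 0.5, 0.25):
    print(f"weight scale {s:4.2f} -> first-layer grad norm {first_layer_grad_norm(s):.2e}")
```

Because the first-layer gradient contains a product of the later-layer weight matrices, halving the weights shrinks that gradient by far more than a factor of two (roughly like the product of the later-layer weight scales), consistent with the analysis above.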
Normalization therefore does not meet our goal, which is to heavily simplify solutions in order to reduce over-fitting, so we propose an approach that does. Since we have seen in Figure 1 that stronger regularization can result in better performance, we propose a method that is able to accommodate strong regularization. Specifically, we replace λ in the parameter update formula with a time-dependent regularization strength λ_t, defined as

λ_t = 0 if epoch(t) ≤ γ, and λ_t = λ otherwise,

where epoch(t) is the epoch number of time step t and γ is a hyper-parameter set through cross-validation. The formula means that we do not impose any regularization until the γ-th epoch, and then impose the strong regularization at every training step. The underlying hypothesis is that once the model follows a good learning path, i.e., the gradient from L is large enough, it will not easily change direction because of the steep slope, and thus it can learn without failure. We verify this hypothesis empirically in the experiment section. The hyper-parameter γ is relatively easy to set because the models usually find the good path within a couple of epochs, and once they follow such a path, learning does not fail. We recommend using 2 ≤ γ ≤ 20. Please note that our approach is different from imposing slightly weaker regularization throughout the whole training: the amount of regularization skipped during the first few epochs is negligible compared to the total amount applied over training. In addition, we show empirically that our approach achieves much higher sparsity in the parameter space than the baseline. The proposed method is easy to implement, and the hyper-parameter is easy to set. Moreover, the method stays very close to the traditional regularization method, so it inherits the traditional method's good performance for non-strong regularization while also achieving strong regularization. Although the method is very simple, it showed the best accuracy among the approaches we tried in our preliminary experiments; these are discussed further in Appendix B. Proximal gradient algorithm for the L1 regularizer. Since the L1 norm is not differentiable at zero, we employ the proximal gradient algorithm BID11, which lets us obtain proper sparsity (i.e., guaranteed convergence) for non-smooth regularizers. We use the following update formulae:

w_i^(t+1/2) = w_i^(t) − α ∂L/∂w_i, evaluated at w = w^(t),
w_i^(t+1) = S_{αλ_t}( w_i^(t+1/2) ), where S_κ(z) = sign(z) · max(|z| − κ, 0),

in which S is the soft-thresholding operator. The algorithm sets a parameter to zero if the magnitude of its next updated value is smaller than αλ; otherwise, it simply decreases the magnitude of the parameter as usual. We first evaluate the effectiveness of our proposed method with the popular architectures AlexNet BID9 and VGG-16 BID15 on the public data sets CIFAR-10 and CIFAR-100 BID8. Then, we employ variations of VGG on another public data set, SVHN BID10, in order to see the effect of the number of hidden layers on the tolerance level. Please note that we do not employ architectures that contain normalization techniques such as batch normalization BID7, for the reason described in Section 2.2. The data set statistics are described in TAB0. VGG-11 and VGG-19 for SVHN contain 9.8 and 20.6 million parameters, respectively. Regularization is applied to all network parameters except bias terms. We use the PyTorch framework for all experiments, and we use its official computer vision library for the implementations of the networks. In order to accommodate the data sets, we made some modifications to the networks, described below.
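Before those implementation details, here is a minimal sketch of the two update rules just introduced, the delayed schedule for λ_t and the proximal soft-thresholding step for the L1 penalty; the default values of λ and γ and the helper names are illustrative choices, not prescribed by the paper.

```python
import torch

def delayed_lambda(epoch, lam=6e-5, gamma=5):
    """Delayed Strong Regularization schedule: no penalty for the first
    `gamma` epochs, full strength afterwards (values are illustrative)."""
    return 0.0 if epoch <= gamma else lam

def proximal_l1_step(params, loss, lr, lam_t):
    """Gradient step on the data loss followed by soft-thresholding,
    i.e. the proximal update for the (non-smooth) L1 penalty."""
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for w, g in zip(params, grads):
            w -= lr * g                                          # plain loss step
            w.copy_(torch.sign(w) * torch.clamp(w.abs() - lr * lam_t, min=0.0))
```

In a training loop one would compute lam_t = delayed_lambda(epoch) once per epoch and pass it to proximal_l1_step; while lam_t is zero the update reduces to plain SGD, and afterwards every parameter whose post-step magnitude falls below lr * lam_t is set exactly to zero, which is where the reported sparsity comes from.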
The kernel size of AlexNet's max-pooling layers is changed from 3 to 2, and the first convolution layer's padding size is changed from 2 to 5. All of its fully connected layers are modified to have 256 neurons. For VGG, we modified the fully connected layers to have 512 neurons. The output layers of both networks have 10 neurons for CIFAR-10 and SVHN, and 100 neurons for CIFAR-100. The networks are learned by stochastic gradient descent algorithm with momentum of 0.9. The parameters are initialized according to BID4. The batch size is set to 128, and the initial learning rate is set to 0.05 and decays by a factor of 2 every 30 epochs during the whole 300-epoch training. In all experiments, we set γ = 5. We did not find significantly different for 2 ≤ γ ≤ 20. Please note that we still use drop out layers (with drop probability 0.5) and pre- AlexNet and VGG-16 are experimented for different regularization methods (L1 and L2) and different data sets (CIFAR-10 and CIFAR-100), yielding 8 combinations of experiment sets. Then, VGG-11, VGG-16, and VGG-19 are experimented for L1 and L2 regularization methods on SVHN, yielding 6 experiment sets. For each experiment set, we set the baseline method as the one with well-tuned L1 or L2 regularization but without our time-dependent regularization strength. For each regularization, we try more than 10 different values of λ, and for each value, we report average accuracy of three independent runs and report 95% confidence interval. We perform statistical significance test (t-test) for the improvement over the baseline method and report the p-value. We also report sparsity of each trained model, which is the proportion of the number of zero-valued parameters to the number of all parameters. Please note that we mean the sparsity by the one actually derived by the models, not by pruning parameters with threshold after training. The experiment by VGG-16 are depicted in FIG1. As we investigated in Section 2.2, the baseline method suddenly fails beyond certain values of tolerance level. However, our proposed method does not fail for higher values of λ. As a , our model can achieve higher accuracy as well as higher sparsity. In practice, L2 regularization is used more often than L1 regularization due to its superior performance, and this is true for our VGG-16 experiments too. Using L2 regularization, our model improves the model without L1 or L2 regularization but with dropout, by 14.4% in accuracy, which is about 24% of error rate improvement. Tuning L2 regularization parameter is difficult as the curves have somewhat sharp peak, but our proposed method ease the problem to some extent by preventing the sharp drop. Our L1 regularizer obtains much better sparsity for the similar level of accuracy FIG1 ), which means that strong regularization plays an important role in compressing neural networks. The improvement is more prominent on CIFAR-100 than on CIFAR-10, and we think this is because over-fitting can more likely occur on CIFAR-100 as there are less images per class than on CIFAR-10.The experiment by AlexNet are depicted in FIG2. Again, our proposed method achieves higher accuracy and sparsity in general. Unlike VGG-16, we obtain more improvement over baseline with L1 regularization than with L2 regularization. In addition, the curves make sharper peaks than those by VGG-16 especially for the sparsity regularizer (L1). 
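As an aside on measurement: the sparsity figures quoted here and below are the fraction of parameters that are exactly zero, not the result of post-hoc thresholding. A small helper such as the following can compute it; excluding bias terms mirrors the fact that they are not regularized, although whether the paper counts them in the denominator is our assumption.

```python
import torch

def model_sparsity(model, include_bias=False):
    """Fraction of exactly-zero parameters in a PyTorch model
    (bias terms are skipped by default)."""
    total, zeros = 0, 0
    for name, p in model.named_parameters():
        if not include_bias and name.endswith("bias"):
            continue
        total += p.numel()
        zeros += (p == 0).sum().item()
    return zeros / total
```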
Interestingly, our proposed method often obtains higher accuracy even when the baseline does not fail on CIFAR-10, and this is only prominent when the regularization strength is relatively strong (better shown in FIG1). This may be because avoiding strong regularization in the early stage of training can help the model to explore more spaces freely, and the better exploration in finding superior local optima. The overall experiment are shown in TAB1. It shows that there is more performance improvement by L1/L2 regularization on VGG-16 than on AlexNet, which is reasonable since VGG-16 contains about 6 times more parameters so that it is more prone to over-fitting. Our proposed model always improves the baselines by up to 3.89%, except AlexNet with L1 regularization on CIFAR-10, and most (6 out of 7) improvements are statistically significant (p¡0.05). Our L1 regularization models always obtain higher sparsity with compression rate up to 4.2× than baselines, meaning that our model is promising for compressing neural networks. We also show in Figure 5 how gradients and weights change when our method and the baseline are applied. We hypothesized that if the model reaches an "active learning" phase with an elevated gradient amount, it does not suffer from vanishing gradients any more even when strong regularization is enforced. The Figure 5a shows that our model indeed reaches there by skipping strong regularization for the first five epochs, and and it keeps learning even after strong regularization is enforced. In Figure 5b, although the same strong regularization is enforced since epoch 5, the magnitude of weights in our model stops decreasing around epoch 20, while that in baseline (green dotted line) keeps decreasing towards zero. This means that our model can cope with strong regularization, and it maintains its equilibrium between gradients from L and those from regularization. The analysis in Section 2.2 implies that the number of hidden layers would affect the tolerance level when strong regularization is imposed. That is, if there are more hidden layers in the neural network architecture, the learning will fail more easily by strong regularization. In order to check the hypothesis empirically, we employ variations of the VGG architecture, i.e., which contain 11, 16, and 19 hidden layers, respectively. We experiment them on the SVHN data set. The by L2 regularization are depicted in Figure 6. As shown, the peak of our method's performance is formed around λ = 1 × 10 −3. As more hidden layers are added to the network, the tolerance level where the performance suddenly drops by the baseline is shifted to left, as hypothesized by our analysis. The by L1 regularization are in Appendix A, and it is shown that VGG-19 more easily fails as the parameters become more sparse. The overall experiment are shown in TAB2. As the method without L1/L2 regularization already performs well on this data set and there are relatively many training images per class, the improvement by L1/L2 regularization is not big. Our method still outperforms the baseline in all experiments (6 out of 6), but the improvement is less statistically significant compared to CIFAR-10 and CIFAR-100 data sets. The compression rate is especially good for VGG-19 mainly because its tolerance level is low so that the baseline can only achieve low sparsity. The related work is partially covered in Section 1, and we extend other related work here. It has been shown that L2 regularization is important for training DNNs BID9 BID1. 
Although there has been a new regularization method such as dropout, L2 regularization has been shown to reduce the test error effectively when combined with dropout BID16. Meanwhile, L1 regularization has also been used often in order to obtain sparse solutions. To reduce computation and power consumption, L1 regularization and its variations such as group sparsity regularization has been promising for deep neural networks BID19 BID14 ). However, for both L1 and L2 regularization, the phenomenon that learning fails with strong regularization has not been emphasized previously. BID0 showed that tuning hyper-parameters such as L2 regularization strength can be effectively done through random search instead of grid search, but they did not study how and why learning fails or how strong regularization can be successfully achieved. visualized activations to understand deep neural networks and showed that strong L2 regularization fails to learn. However, it was still not shown how and why learning fails and how strong regularization can be achieved. To the best of our knowledge, there is no existing work that is dedicated to studying the phenomenon that learning fails with strong regularization and to proposing a method that can avoid the failure. In this work, we studied the problem of achieving strong regularization for deep neural networks. Strong regularization with gradient descent algorithm easily fails for deep neural networks, but few work addressed this phenomenon in detail. We provided investigation and analysis of the phenomenon, and we found that there is a strict tolerance level of regularization strength. To avoid this problem, we proposed a novel but simple method: Delayed Strong Regularization. We performed experiments with fine tuning of regularization strength. Evaluation show that our model successfully achieves strong regularization on deep neural networks, verifying our hypothesis that the model will keep learning once it reaches an "active learning" phase, with strong regularization, our model obtains higher accuracy and sparsity, the number of hidden layers in neural networks affects the tolerance level, and L1/L2 regularization is difficult to tune, but it can yield great performance boost when tuned well. There are limitations in this work. Our proposed method can be especially useful when strong regularization is desired. For example, deep learning projects that cannot afford a huge labeled data set can benefit from our method. However, strong regularization may not be necessary in some other cases where the large labeled data set is available or the networks do not contain many parameters. In addition, our experiments were not performed on a bigger data set such as ImageNet data set. We need to fine-tune the models with different regularization parameters, and we also need multiple training sessions of each model to obtain confidence interval. For example, the experiment in FIG1 and 4 include 750 training sessions in total. This is something we cannot afford with ImageNet data set, which requires several weeks of training for EACH session (unless we have GPU clusters). Our approach cannot be applied to architectures containing normalization techniques for the reason in Section 2.2. We actually tried to intentionally exclude normalization part from Residual Networks BID5 ) and train the model to see if we can apply our method to non-normalized Residual Networks. However, we could not control the exploding gradients caused by the exclusion of normalization. 
Our work can be further extended in several ways. Since our model can achieve strong regularization, it will be interesting to see how the strongly regularized model performs if combined with pruning-related methods BID2. We applied our approach to only L1 and L2 regularizers, but applying it to other regularizers such as group sparsity regularizers will be promising as they are often employed for DNNs to compress networks. Lastly, our proposed Delayed Strong Regularization is very simple, so one can easily extend it to more complicated methods. All these directions are left as our future work. To empirically check the effect of the number of hidden layers on the tolerance level, we experimented variations of VGG on SVHN, and we showed the by L2 regularizer in Section 3.2. Here, we show the by L1 regularizer in FIG4. As more hidden layers are included to the network, the tolerance level where the baseline method suddenly fails is shifted to left. Such pattern in baseline method is more clearly shown in the accuracy vs. sparsity plots. VGG-19 fails to learn even when it loses only 27% of its parameters, whereas VGG-11 can still learn after losing 84% of its parameters. The reason why we proposed a very simple method is that it is effective while it is simple to implement. The only additional hyper-parameter, which is the number of initial epochs to skip regularization, is also not difficult to set. We think that the proposed method is very similar to the traditional regularization method so that it inherits the traditional one's good performance for non-strong regularization while it also achieves strong regularization. We actually tried a couple more approaches other than the proposed one in our preliminary experiments. We found that the proposed one shows the best accuracy among the approaches we tried while it is the simplest. For example, we tried an approach that can be regarded as a warm-start strategy. It starts with the regularization parameter λ t = 0, and then it gradually increases λ t to λ for γ epochs, where γ >= 0 and it is empirically set. We found that it can achieve strong regularization, but its best accuracy is similar to or slightly lower than that of our proposed approach. We also tried a method that is similar to Ivanov regularization BID12. In this method, the regularization term is applied only when the L1 norm of the weights is greater than a certain threshold. To enforce strong regularization, we set λ just above the tolerance level that is found by the baseline method. However, this method did not accomplish any learning. The reason is that, to reach the level of L1 norm that is low enough, the model needs to go through the strong regularization for the first few epochs, and the neurons already lose its learning ability during this period like the baseline method. If we set λ below the tolerance level, it cannot reach the desired L1 norm without strong regularization, and thus the performance is inferior to our proposed method. Meanwhile, an approach that applies strong regularization first and then continuously reduces the regularization strength is used in sparse learning for convex optimization. This approach is opposite to our approach in that ours avoids strong regularization for the first few epochs and then apply strong regularization afterwards. We performed a simple experiment with VGG-16 on CIFAR-100 to see if the approach can perform well for deep neural networks. 
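For comparison, the alternative schedules discussed in this appendix can be written next to our delayed schedule as simple functions of the epoch. The exact ramp and decay shapes below are illustrative assumptions, since the text does not fully specify them.

```python
def delayed_schedule(epoch, lam, gamma):
    """Proposed: no regularization for the first gamma epochs, then full strength."""
    return 0.0 if epoch <= gamma else lam

def warm_start_schedule(epoch, lam, gamma):
    """Alternative: ramp lambda_t from 0 up to lam over the first gamma epochs."""
    return lam * min(epoch / max(gamma, 1), 1.0)

def annealed_schedule(epoch, lam_init, total_epochs):
    """Alternative: start above the tolerance level and decay lambda_t to zero."""
    return lam_init * max(1.0 - epoch / total_epochs, 0.0)
```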
For this experiment, we set the initial regularization parameter to λ = 2 × 10^−3 and λ = 6 × 10^−5 for L2 and L1 regularization, respectively, which are just above the "tolerance level". Then, we continuously reduced λ_t to zero throughout the training session. The trained models did not show any improvement over random guessing, which means that they were not able to learn. Once strong regularization is enforced at the beginning, the magnitudes of the weights decrease quickly. This in turn drives the magnitudes of the gradients to diminish exponentially in deep neural networks, as explained in Section 2.2, and thus the model loses its ability to learn after a short period of strong regularization. | We investigate how and why strong L1/L2 regularization fails and propose a method that can achieve strong regularization. | 1,221 | scitldr |
Despite neural network’s high performance, the lack of interpretability has been the main bottleneck for its safe usage in practice. In domains with high stakes (e.g., medical diagnosis), gaining insights into the network is critical for gaining trust and being adopted. One of the ways to improve interpretability of a NN is to explain the importance of a particular concept (e.g., gender) in prediction. This is useful for explaining reasoning behind the networks’ predictions, and for revealing any biases the network may have. This work aims to provide quantitative answers to \textit{the relative importance of concepts of interest} via concept activation vectors (CAV). In particular, this framework enables non-machine learning experts to express concepts of interests and test hypotheses using examples (e.g., a set of pictures that illustrate the concept). We show that CAV can be learned given a relatively small set of examples. Testing with CAV, for example, can answer whether a particular concept (e.g., gender) is more important in predicting a given class (e.g., doctor) than other set of concepts. Interpreting with CAV does not require any retraining or modification of the network. We show that many levels of meaningful concepts are learned (e.g., color, texture, objects, a person’s occupation), and we present CAV’s \textit{empirical deepdream} — where we maximize an activation using a set of example pictures. We show how various insights can be gained from the relative importance testing with CAV. Neural networks (NNs) are capable of impressively good performance, yet understanding and interpreting their behavior remains a significant challenge. Solving this challenge is an important problem for several reasons. For example, explaining a system's behavior may be necessary to establish acceptability and see adoption for critical applications, such as those in the medical domain. For scientists and engineers, any greater understanding of how neural networks function is appreciated, since it may lead to better models and help with debugging (30; 19).Recent work suggests that linear combinations of neurons may encode meaningful, insightful information (2; 19; 27). However, we lack methods to 1) identify which linear combinations (if any) relate to a given concept, and 2) how these can aid in our quantitative understanding of concepts and classification decisions. For example, we may hypothesize that an image model that successfully classifies zebras may naturally encode concepts for'stripe' and'animal', somewhere in its internal representations, using a linear combination of neurons. How can we formalize this notion, and test such a hypothesis?Neural networks build internal representations that are far richer than the input features or output classes explicit in their training data. Unfortunately, many machine learning interpretation methods provide only in terms of input features. For example, the learned coefficients in linear classifiers or logistic regression can be interpreted as each feature's classification importance. Similar first-order importance measures for neural networks often use first derivatives as a proxy for input feature importance, as is done for pixel importance in saliency maps (8; 22).It is critical that model understanding and interpretation not be limited to only the concepts explicit in training data. 
This can be seen by considering classification fairness-an increasingly relevant, difficult problem where interpretability can be useful-and noting that no input features may identify discriminated-against groups. For example, the Inception model BID24 has an output class for'doctor' but no input features identifying the concepts of'man' or'woman' in a way that would allow existing interpretability approaches to quantify gender bias in classification. This work introduces the method of concept activation vectors (CAV) for the following purposes. First, CAV can be used to identify linear combinations of neurons in a layer of a model that correspond to given semantic concepts, even for new, user-provided concepts not explicit in the model's training data. Second, CAV provides quantitative measures of the relative importance of userprovided concepts, which allows for hypothesis testing of the relationship between given concepts and the model's predictions. Testing with CAV (TCAV) is designed with the following desiderata in mind.1. accessibility: Requires little to no user expertise in machine learning. 2. customization: Adapt to any concept of interest (e.g., gender) on the fly without pre-listing a set of concepts before training. 3. plug-in readiness: Work without retraining or modifying the model. BID2. quantification: Provide quantitative explanation that are tied to human-relatable concept, and not input features. One of key ideas for TCAV is that we can test the relative importance between small set of concepts, rather than ranking the importance of all possible features/concepts. For example, we can gain insights by testing whether the concept of gender was used more than the'wearing scrubs' concept for the classification of doctor. We can also test whether or not a given concept was relevant to the classification of a certain class. Similar forms of sparsity (i.e., only considering a few concepts at a time) are used in many existing interpretable models (12; 7; 28; 31; 29; 4). Note that interpretability does not mean understanding the entire network's behavior on every feature/concept of the input BID4. Such a goal may not be achievable, particularly for ML models with super-human performance BID21.TCAV satisfies these desiderata-accessibility, customization, plug-in readiness and quantification -it enables quantitative relative importance testing for non-ML experts, for user-provided concepts without retraining or modifying the network. Users express their concepts of interest using examples-a set of data points exemplifying the concept. For example, if gender is the concept of interest, users can collect pictures of women. The use of examples has been shown to be powerful medium for communication between machine learning (ML) models and non-expert users (16; 12; 13). Cognitive studies on experts also support this approach (e.g., experts think in terms of examples BID13).The structure of this paper is as follows: Section 2 relates this work to existing interpretability methods. Section 3 explains the details of the TCAV method. In Section 4, we show 1) how this framework can be used to identify semantically meaningful directions in a layer and 2) the relative importance testing that measure the relevance of concepts of interest to the classification output by the network. In this section, we provide a brief overview of existing related interpretability methods and their relation to our desiderata. We also discuss the need and the challenges of desiderata 3): plug-in readiness. 
One of the most popular approaches in interpreting NN is saliency methods (24; 22; 25; 8; 5). These techniques seek to identify regions of the input most relevant to the classification output by the network. Qualitatively, these methods often successfully label regions of the input which seem semantically relevant to the classification. Unfortunately, these methods do not satisfy our desiderata of 2) customization and 4) quantification. Recent work has also demonstrated that the saliency map these methods produced may be very sensitive to completely uninteresting properties of the data distribution. In particular, BID12 showed that simply applying a mean shift to the dataset may cause some saliency methods to in significant changes in the given explanation. also showed that saliency methods can be easily tricked. There are techniques, such as DeepDream, which can be used to visualize patterns that maximally activates each neuron of a neural network. The technique starts from an image of random noise and iteratively modifies the image in order to maximally activate a neuron or a linear combination of neurons of interest (17; 18). This technique has offered some insights into the information encoded in a neuron's activation. This technique also has opened up opportunities for AI-aided art BID15.However, the DeepDream method does not satisfy our desiderata 1) accessibility, 2) customization, and 4) quantification. It does not satisfy 1) because in order to apply it a user must first understand what a neuron is, and second be familiar enough with the the internal architecture in order to choose which neurons to visualize. It does not satisfy 2) because no current method exists to find which neurons correspond to semantically meaningful concepts such as gender, and it is unclear whether or not such a neuron even exists. It does not satisfy 4) because we do not understand these pictures and there is currently no method to quantify how these pictures relate to the output of the network. This method again does not provide actionable insight. As we show later, DeepDream may be combined with TCAV in order to identify and visualize interesting directions in a layer. Prior work on DeepDream has typically chosen neurons or linear combinations of neurons at random to visualize. Note that a user of TCAV does need to pick a layer in the network for which to apply TCAV to. However, if only the final prediction is concerned, the last layer can be used by default. To achieve interpretability, we have two options: restrict ourselves to inherently interpretable models or post-process our models in way that yields insights. Users may choose option as there are a number of methods to build inherently interpretable models (12; 7; 28; 31; 29; 4). If the users are willing and are able to adapt these models, then this is the gold standard. Although building inherently interpretable machine learning models may be possible for some domains, doing so may in decreased performance. Furthermore, changing the model may be costly for users who already have a model implemented. A method that can be applied without retraining or modifying the network could be instantly used to interpret existing models. Increasingly, attention is turning to the importance of providing explanations for ML decisions (for one example, consider). As a , there is a growing need for interpretation techniques that can be applied "out of the box," that is, without rebuilding or retraining existing systems. 
One of many challenges of building a post-processing interpretion method is to ensure that the explanation is truthful to the model's behavior. By "truthful" we mean that explanations are roughly consistent with model's internal state. For example, if the explanations are created completely independently of the model FORMULA0, it has high probability of having such inconsistencies. The plug-in readiness desiderata poses interesting challenge for explanations to remain consistent with the model's behavior. Recently, there has been work showing that saliency methods contains such inconsistencies. For instance, BID12 show that saliency methods are vulnerable to constant shift in input that does not affect classification. It has also shown that the methods can be easily tricked BID7.One way to improve consistency between explanations and the model's reasoning is to use the generated explanation as an input, and check the network's output for validation. This is typically used in perturbation-based interpretability methods (16; 20). These methods perturb the input, and use the network's response to generate explanations. They maintain the consistency either locally or globally 1 by construction. TCAV is a type of perturbation method. Even a truthful explanation may be misleading if it is only locally truthful BID18. For example, since the importance of features only needs to be truthful in the vicinity of the data point of interest, there is no guarantee that the method will not generate two completely conflicting explanations. These inconsistencies may in decreased user trust at a minimum. On the other hand, making a globally truthful explanation may be difficult as the networks decision boundaries may be complex. TCAV produces globally explanations, and uses model's output to generate explanations to maintain consistency between explanations and the model's reasoning. We introduce a method that allows for global linear interpretability of a highly flexible class of models, namely deep feedforward neural networks trained on classification tasks. As a form of explanation, TCAV uses concepts that are provided by users, instead of using predetermined set of input features or input classes. These concepts are tied to real-world data that represents concepts of interest. Users express their concepts of interest using examples -a set of data points exemplifying the concept. These concepts enable testing the relative importance of concepts used in classification. Informally, the key idea is the following: although feedforward networks learn highly nonlinear functions there is evidence that they work by gradually disentangling concepts of interest, layer by layer (2; 3). It has also been shown that representations are not necessarily contained in individual neurons but more generally in linear combinations of neurons (19; 27). Thus the space of neuron activations in layers of a neural network may have a meaningful global linear structure. Furthermore, if such a structure exists, we can uncover it by training a linear classifier mapping the representation in a single layer to a human selected set of concepts. We now formalize this intuition. First, we formally define a concept activation vector. Let us imagine that an analyst is interested in a given concept C (e.g., striped textures) and has gathered two sets of data points, P C and N, that represent positive and negative examples of this concept (say, photos of striped objects, versus a set of random photos). 
There is a lot of flexibility in how to choose N, we often choose a random set of images of the same size as P C. Here we represent the input to the network as vector in R n, and P C, N ⊂ R n. Consider a layer l of the feedforward network consisting of m neurons. Then running inference on an input example and looking at the activations at layer l yields a function f l: R n → R m. For a set of inputs X ⊆ R n we denote by f l (X) to be the set of layer activations {f l (x): x ∈ X}. Note that for convolutional networks we view a layer in it's flattened form, thus a layer of width w, height h, and c channels becomes a flat vector of m = w × h × c activations. The two sets P C and N then naturally give rise to two sets of activation vectors in layer l, namely f l (P C) and f l (N). We can then train a linear classifier on the binary classification task of distinguishing between these two sets. The weights of this classifier are an element v l C ∈ R m. We call this the concept activation vector for the concept C.A variation on this idea is a relative concept activation vector. Here the analyst selects two sets of inputs that represent two different concepts, C and D. Training a classifier on f l (P C) and f l (P D) yields a vector v A key benefit of this technique is the flexibility allowed in choosing the set P C. Rather than being tied to the class labels in the training data, an analyst can-after training-select sets that correspond to any concept of interest. We now describe two ways that this technique be used to interpret a feedforward neural network trained on an image classification task. First, we show relative importance testing between M number of concepts, P C i, where i ∈ M. In this case, each v l C is learned by treating all P C i, i = j as N. Second, we can also test one concept, P C, against a random concept, where N are set of random pictures. In the next section we discuss the of experiments with these methods. The real value of CAVs comes from the fact that they may be used to test the relative importance of concepts. With CAVs, we can formulate generating explanation as a task of performing two-tailed statistical significance test (in particular, z-test). Given samples of class images (e.g., zebra pictures) and two concept vectors A and B, we perform two-tailed z-testing to invalidate the null hypothesis that there is no difference in importance of concepts A and B for the class. We perform this testing for each pair of concepts. For example, an analyst might ask about a photo of a zebra, "was the presence of stripes in the image relevant to the model's classification of the image as a zebra?" In some cases, with tedious effort in a photo retouching program, it might be possible answer questions like this directly. However, CAVs provide a faster and more general technique. Consider an input x and concept C of interest. At inference time, this will give rise to activations f l (x) at layer l. We can then modify this vector of activations in various ways using the concept vector v l C. For example, we might create a new vector w − = f l (x)−v l C, which might be interpreted as a counterfactual of "x with less of concept C". Performing inference starting at layer l + 1 based on w − will give a new classification y w. 
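As an aside on how the CAV itself is obtained in practice: any off-the-shelf linear classifier on pre-computed layer activations will do. The sketch below uses scikit-learn logistic regression purely as an illustration of the definition above; the paper does not prescribe this particular solver.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(acts_concept, acts_negative):
    """Learn a concept activation vector from layer-l activations.

    acts_concept  : array of shape (n_pos, m), f_l(x) for concept examples P_C
    acts_negative : array of shape (n_neg, m), f_l(x) for negative/random examples N
    Returns the weight vector of a linear classifier separating the two sets,
    i.e. the CAV v_C^l.
    """
    X = np.vstack([acts_concept, acts_negative])
    y = np.concatenate([np.ones(len(acts_concept)), np.zeros(len(acts_negative))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # One may additionally unit-normalize the result; as noted below, the norm
    # of the raw vector depends on how exactly the classifier was trained.
    return clf.coef_.ravel()
```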
Symmetrically, we can also create a new vector w_+ = f_l(x) + v_C^l, which might be interpreted as a counterfactual of "x with more of concept C". Thus, to test the relevance of stripes to the classification of a zebra, we would either add or subtract a 'stripe' concept activation vector to or from the zebra image embedding, run forward propagation on the resulting embedding, and examine the output of the network. Large changes in the probability of zebra would indicate that stripes played a key role in the classification. Simply adding the vector is a bit ad hoc, especially since the norm of the vector may depend on how exactly the CAV was trained. However, we found that this naive method empirically yields results that are consistently semantically meaningful. We also note that this addition is loosely related to a directional derivative: saliency maps take the derivative of the logits with respect to each pixel, while our work takes derivatives with respect to a concept direction. Future work should explore more principled methods to measure the relevance of concepts to classification. Quantitatively, the influence of concept C on class k can be measured by

I_w^up = (1/N) · |{ i ∈ {1, ..., N} : p_k(y_{w+, i}) > p_k(y_i) }|,

where p_k(y) is the probability of class k under prediction y, y_i is the network's prediction for the i-th image, y_{w+, i} is the prediction after the modification, and N is the number of images of the class under inspection (e.g., zebra); I_w^down is defined analogously using w_−. Intuitively speaking, I_w^{up/down} measures the fraction of data points that become 'more/less like class k' after the modification with the concept vector v_C^l. We can perform statistical significance testing in order to quantify concept importance; we can test the hypothesis that one concept is more or less important than another, for instance against the null hypothesis that no particular color is significant for the class. To test this hypothesis we can perform z-testing on the measured importance values and ask how probable it is that random concept vectors would produce the measured difference. As a simple demonstration of the value of CAVs, we describe a technique for localizing where a concept is disentangled in a network. A common view of feedforward networks is that different concepts are represented at different layers. For example, many image classification networks are said to detect textures at low levels, with increasingly complex concepts recognized at later layers. Typically, evidence for this view comes from various methods of reverse-engineering (3; 30; 17) the behavior of individual neurons in the network. Using concept activation vectors, however, we can approach this question from the opposite direction: we can pick out a particular concept C in advance and consider the concept vectors v_C^l learned at each layer l (2; 3; 19). In the next section we describe the results of an experiment based on this idea. In this section, we first show evidence that the learned CAVs indeed detect the concepts of interest. Then we show the results of hypothesis testing with these CAVs and the insights one can gain from them. All our experiments are based on the Inception model BID24, using its publicly available model parameters. The pictures used to learn the CAVs are collected from the latest ImageNet Fall 2011 release BID19. Each concept vector is learned using between 30 and 500 pictures as input. We intentionally chose not to use all the available pictures in order to mirror realistic scenarios where users may not be able to collect a large amount of data. The class 'arms' had only 30 pictures (all manually collected), since ImageNet does not include this concept as part of the dataset.
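For reference, the I_up / I_down metric defined above can be computed in a few lines once the activations are in hand. The helper forward_from_layer, which maps layer-l activations to the network's softmax output by running the remaining layers, is an assumed interface; everything else follows directly from the definition.

```python
import numpy as np

def tcav_influence(acts, cav, forward_from_layer, class_k, sign=+1):
    """Fraction of class-k images whose class probability rises after the
    activations are shifted along the CAV (I_up for sign=+1, I_down for -1).

    acts               : (N, m) layer-l activations f_l(x_i) of N class-k images
    cav                : (m,) concept activation vector v_C^l
    forward_from_layer : assumed helper mapping layer-l activations to softmax outputs
    """
    probs_orig = forward_from_layer(acts)[:, class_k]
    probs_mod  = forward_from_layer(acts + sign * cav)[:, class_k]
    return float(np.mean(probs_mod > probs_orig))
```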
The pictures used to learn texture concepts are taken from the data set. In order to learn concept vectors for colors, we generated 500 pictures synthetically by generating the color channels randomly. We provide experiments both where CAV's are trained to distinguish between a set of concepts (e.g. red, yellow, green, blue) and when one CAV is trained per concept (with N chosen to be a random collection of images). In this section we describe experiments that indicate our linearly learned CAVsalign with their intended concepts. First, the linear maps used to construct the CAVsare accurate in predicting the concepts. The point in the network where these concepts in the networks are learned (i.e., when accuracy becomes high) is consistent with general knowledge about the representations learned by neural networks. Low level features such as edges and textures are detected in the early layers and higher level concepts are only reliably detected in the upper layers. Next, we use activation maximization techniques BID16 to visualize each of the CAVs and observe that the patterns are consistent with the provided concept. Finally, we show the top k images that are most similar in terms of cosine similarity to each CAV for further qualitative confirmation. Figure 1 shows the accuracy of the linear classifiers at each layer in the network for each type of CAV. Each classifier is trained to distinguish one concept from other concepts (e.g., textures1 set contains 'stripes', 'zigzagged' and 'dotted' texture concepts). Overall, we observe high accuracy as measured by a held out test set of size 1/3 that of the training size. This is evidence that the given concepts are linearly separable in many layers of the network. Note that the accuracy of more abstract CAV (e.g., objects) increases in higher layers of the network. The accuracy of a simpler CAV, such as color, is high throughout the entire network. This agrees with prior work on visualizing the learned representations at different layers of neural networks BID28.This experiment does not yet show that these CAVs align with the concepts that makes sense semantically to humans. We demonstrate this with the next set of experiments. In this section, we use the activation maximization technique BID15 to visualize the learned representations in the direction of the CAV. We use the same techniques from BID16. As typically done, we use a random image as a starting point BID15 for the optimization to avoid choosing an arbitrary image as a starting point. Figure 2 shows highly recognizable features, such as knitted textures, and corgi faces. Figure 3: Deepdreamed CAV texture concepts for each layer (each row) in inception. We can identify textures better in the mid-layer (mixed4d), then later layers. We can also observe how each concept is represented as the depth increases. Images in Fig. 3 show the for sets of textures. Interestingly, there is a layer (around mixed 4d) where textures are clearly identifiable, after which they become less and less recognizable. The left images in FIG3 show the for set of colors -green, blue, yellow and red (each column) and for the set of layers (lower to higher layers from top to bottom). The training set for the color CAVs are generated by simply replacing each RGB channel with randomly sampled values around 0.5, while leaving other channels empty. 
Note how the color CAVs also contain many random textures, this is perhaps not surprising -there is no reason to expect any direction in a layer to be associated purely with one concept while being orthogonal to all other concepts. The right image in FIG3 shows higher level concepts. The columns represent zebra, siberian huskies, and corgis. It is interesting to note how the zebra CAVs include textures suggestive of water, trees, and grass. This suggests that the model associates all of these concepts with the classi- fication of zebra and is likely a learned bias that from the of zebra pictures in the training data. These visualizations provide some qualitative confirmation that the learned directions align with the given concepts. This also shows that DeepDream may be combined with TCAV in order to identify and visualize interesting directions in a layer. Prior work on DeepDream has typically chosen neurons or linear combinations of neurons at random to visualize. The next section provides further evidence that the CAVs are indeed aligned with the concept of interest using real data. In order to qualitatively confirm that the learned CAV aligns with meaningful concepts, we compute cosine similarity between a set of pictures (all from one class) to the CAV. FIG4 shows that the top corgi images that aligns with striped CAV selects pictures with striped objects (e.g., a striped tube or vertical blinds in the ). Thus being similar to the striped CAV meant being highly similar to one of the other concepts in the CAV training set. We also show that CAVs for more abstract concepts can be constructed (e.g., CEO). Recall that many of these learned concepts are not classes that the NN learned to classify. We also observe that if there is not much relation between pictures and the concept, the cosine similarity is low. This experiment qualitatively supports our argument that the alignment between CAVs and the meaningful concepts. Note that the images least similar to striped appear with a checkered or knitted pattern, likely due to the fact that the striped CAV was trained relative to checkered and knitted images. When using TCAV, in order to ensure that the provided examples were sufficient to learn CAV, we recommend users to perform this qualitative test by checking cosine similarity with each concepts to the class. In this section, we describe how concept activation vectors may be used for quantitative testing of the relative importance of concepts. In FIG5 we show I up w + for eight classes. We do not list I up w −, as they always appear to be the inverse of I up w +. This means that when CAVs are added, the probability of the class rarely stays the same. This confirms that adding or subtracting v l C clearly impacts the classification task. In particular the v l C direction is aligned with the direction that measures the probability of the target class. Note that adding random directions should be expected to change the probability of the target class in some way. We find, however, that for several CAV's which are semantically relevant to the class (e.g., red to fire engine) the CAVdirection will consistently increase the probability of class for most images of this class. On the other hand, we show that random directions (e.g., a concept learned from random set of images) tend to be much less consistent. This difference allows us to perform a more rigorous hypothesis test in a following section. 
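As a brief aside, the cosine-similarity sanity check recommended earlier in this section is straightforward to script; the sketch below ranks a class's images by their similarity to a CAV so that the most- and least-aligned examples can be inspected. The interface (precomputed activations plus image identifiers) is an illustrative assumption.

```python
import numpy as np

def rank_images_by_cav(acts, cav, image_ids, top_k=10):
    """Sort a class's images by cosine similarity between their layer-l
    activations (rows of `acts`) and a CAV, returning the top_k matches."""
    sims = acts @ cav / (np.linalg.norm(acts, axis=1) * np.linalg.norm(cav) + 1e-12)
    order = np.argsort(-sims)
    return [(image_ids[i], float(sims[i])) for i in order[:top_k]]
```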
This testing identifies relationships between concepts and the target class that agree with human intuition (e.g., fire engines are red, cucumbers are bumpy, zebras are striped, CEOs wear suits). In the next section, we describe insights we gained from these tests. In this section, we show that confirm common-sense intuitions about training images, as a kind of sanity check that TCAV provides valid . We then describe that surprised us, but lead to insights about the dataset that we did not initially have. In FIG5, the yellow color was more influential to cab class than any other colors. For people who have lived in cities where this is the standard color for taxis, this is no shock -and indeed, most ImageNet pictures of cabs are yellow. Similarly, the'women' concept is important to'bikini' class, likely due to the fact that training images have humans wearing the bikini rather than pictures of the clothing. More subtly, we also discovered that the'model women' concept is important to'bikini' This lead us to realize that the most of'bikini' class pictures feature professional models, typically very slim, posing in their bikinis -probably not well representative samples of bikinis in the real world. Note that the network was trained with bikini and cab classes, and the concept'yellow','women' and'model women' are not part of classes used to train this network 2.The graph for'dumbbell' class in FIG5 shows that'arms' concept was more important to predict dumbbell class than other concepts. This finding is consistent with previous qualitative findings from BID15, where they discovered that the DeepDream picture of a dumbbell also showed an arm holding it. TCAV allows for quantitative confirmation of this previously qualitative finding. Moreover, unlike all other concepts in this figure, we only collected 30 pictures of each concept (ImageNet did not have arms as a label). Despite the small number of examples, the TCAV method is able to identify this relationship. The flexibility of the TCAV method makes it easy to explore a network for other surprising associations. For example, we saw that'baby' concept was important to'school bus' class. It turns out that some school bus pictures include young children riding or standing in front of the bus. Another discovery was to find that'red' concept is important for cucumber class. We believe this due to the fact that a number of cucumber pictures include other red vegetables, such as tomato and carrot. We believe that TCAV can be useful in discovering these insights for many applications, including for example, to improve fairness in the model. When constructing the CAV, the choice of negative samples may cause spurious suggestions that a concept is relevant to a class when it actually is not. In other words, the learned CAV may accidentally be aligned with something related to the class, and cause high I up w +. For instance, when we tested unrelated concepts and classes, such as zebra to a set of textures, honeycombed, lace-like and bumpy textures, we found that lace-like concept shows high I up w +. One might argue that lace is vaguely related to a zebra's stripes, but in fact even if we choose both P C and N to be independent random sets of images we still often observe I up w + to be relatively high. This is not surprising: there are directions in a layer l which are highly aligned with the zebra concept, and randomly chosen directions will typically have small but non-zero projection along these directions. 
One way to filter out these spurious is to do statistical testing against random concepts. Using different sets of random images for the negative class and striped images for P C, we make 50-100 CAVs, all of which represent concept'striped'. For each'striped' CAV we can measure I up w +. We can also generate a set of'random' CAVs by choosing random images for both P C and N. Then we performed a z-test to see if the mean I up w + s from striped CAVs are statistically different from the mean I up w + of random CAVs. We can successfully filter out some spurious correlations, including lace-like concepts with zebras by using this method, see the histogram 4.4. We have introduced the notion of a "concept activation vector," or CAV, which is a flexible way to probe the internal representation of a concept in a classification network. Since CAVs may be defined via a set of example inputs, rather than custom coding, they are well suited to use by non-experts. We then described a technique (Testing with CAVs, or TCAV) for quantifying the relation between a CAV and a particular class. The TCAV technique allows us to provide quantitative answers to questions such as, "How important are the stripes to the classification of a zebra?"To provide evidence for the value of the TCAV technique, we described a series of experiments which supported common-sense intuition, for example, that stripes are indeed important to the identification of zebras. In addition, we used the DeepDream technique to create images whose internal representations approximate certain CAVs. The ing pictures were strongly evocative of the original concepts. Finally, we described how the TCAV technique may be used to find associations between concepts, both obvious ("yellow" and "taxi") and non-obvious ("red" and "cucumber").In addition to analyzing a single network, TCAV can be also used to compare and contrast a pair of networks. For example, one can compare the relative importance of concepts to determine how the different choices of training process or architecture influences learning of each concept. Based on the , users can perform model selection based on the concepts that are more or less important for the task. An interesting direction for future work may be to explore applications of using CAVs to adjust the of a network during inference time. Adding a scalar multiple of a CAV to the activations of an intermediate layer can, as shown in our experiments, allow us to deemphasize or enhance conceptual aspects of an input. One potential application, for example, might be to reduce bias the network has learned from training data. | This work aims to provide quantitative answers to the relative importance of concepts of interest via concept activation vectors (CAV). In particular, this framework enables non-machine learning experts to express concepts of interest and test hypotheses using examples (e.g., a set of pictures that illustrate the concept). We show that CAV can be learned given a relatively small set of examples. Hypothesis testing with CAV can answer whether a particular concept (e.g., gender) is more important in predicting a given class (e.g., doctor) than other sets of concepts. Interpreting networks with CAV does not require any retraining or modification of the network. | 1,222 | scitldr |
We present a new family of objective functions, which we term the Conditional Entropy Bottleneck (CEB). These objectives are motivated by the Minimum Necessary Information (MNI) criterion. We demonstrate the application of CEB to classification tasks. We show that CEB gives: well-calibrated predictions; strong detection of challenging out-of-distribution examples and powerful whitebox adversarial examples; and substantial robustness to those adversaries. Finally, we report that CEB fails to learn from information-free datasets, providing a possible resolution to the problem of generalization observed in. The field of Machine Learning has suffered from the following well-known problems in recent years 1:• Vulnerability to adversarial examples. Essentially all machine-learned systems are currently believed by default to be highly vulnerable to adversarial examples. Many defenses have been proposed, but very few have demonstrated robustness against a powerful, general-purpose adversary. Lacking a clear theoretical framework for adversarial attacks, most proposed defenses are ad-hoc and fail in the presence of a concerted attacker BID8 BID5 ).• Poor out-of-distribution detection. Classifiers do a poor job of signaling that they have received data that is substantially different from the data they were trained on. Ideally, a trained classifier would give less confident predictions for data that was far from the training distribution (as well as for adversarial examples). Barring that, there would be a clear, principled statistic that could be extracted from the model to tell whether the model should have made a low-confidence prediction. Many different approaches to providing such a statistic have been proposed BID18 BID28 BID19 BID32 BID30 BID13, but most seem to do poorly on what humans intuitively view as obviously different data.• Miscalibrated predictions. Related to the issues above, classifiers tend to be very overconfident in their predictions BID18. This may be a symptom, rather than a cause, but miscalibration does not give practitioners confidence in their models.• Overfitting to the training data. BID48 demonstrated that classifiers can memorize fixed random labelings of training data, which means that it is possible to learn a classifier with perfect inability to generalize. This critical observation makes it clear that a fundamental test of generalization is that the model should fail to learn when given what we call information-free datasets. I Consider a joint distribution, p(x, y), represented by the graphical model: DISPLAYFORM0 This joint distribution is our data, and may take any form. We don't presume to know how the data factors. It may factor as p(x, y) = p(x)p(y|x), p(x, y) = p(y)p(x|y), or even p(x, y) = p(x)p(y).The first two factorings are depicted in FIG0 in a standard information diagram showing the various entropies and the mutual information. We can ask: given this generic setting, what is the optimal representation? It seems there are only two options: capture all of the information in both X and Y (measured by the joint entropy, H(X, Y)), or capture only the information shared between X and Y (measured by the mutual information, I(X; Y)).The field of lossless compression is concerned with representations that perfectly maintain all of the information in both X and Y, as are the closely related studies of Kolmogorov Complexity BID25 and Minimum Description Length (MDL) BID17, all three of which are concerned with perfect reconstruction of inputs or messages. 
In contrast, we think that the field of machine learning is primarily concerned with making optimal predictions on unseen data. The requirements of perfect reconstruction from a compressed representation may in the retention of much more information in the model than may be needed for prediction or stochastic generation tasks. For most such machine learning tasks, this points towards learning representations that capture only the information shared between X and Y, which is measured by the mutual information, I(X; Y).The mutual information is defined in a variety of ways; we will use two (Cover & BID12 : DISPLAYFORM1 I(X; Y) measures the amount of information necessary to define the relationship between X and Y. For some fixed dataset X, Y, any information less than I(X; Y) must be insufficient to predict Y from X or vice-versa with minimal error. Equivalently, any information more than I(X; Y) must contain some superfluous information for those two tasks. For example, consider a labeled dataset, where X is high-dimensional and information-rich, and Y is a single integer. All of the information in X that is not needed to correctly predict the single value Y = y is useless for the prediction task defined by the dataset, and may be harmful to the performance of a machine learning system if retained in the learned representation, as we will show empirically below. Next, we formalize this intuition about the information required for an optimal representation. We propose the Minimum Necessary Information (MNI) criterion for a learned representation. We can define MNI in three parts. First is Information: we would like a representation that captures semantically meaningful information. In order to measure how successfully we capture meaningful information, we must first know how to measure information. Thus, the criterion prefers informationtheoretic approaches, given the uniqueness of entropy as a measure of information BID36. The semantic value of information is given by a task, which is specified by the set of variables in the dataset. I.e., the dataset X, Y defines two tasks: predict Y given X, or predict X given Y. This brings us to Necessity: the information we capture in our representations must be necessary to solve the task. 2 Finally, Minimality: this simply refers to the amount of information -given that we learn a representation that can solve the task, we require that the representation we learn retain the smallest amount of information about the task out of the set of all representations that solve the task. This part of the criterion restricts us from incorporating "non-semantic" information into our representation, such as noise or spurious correlation. More formally, in the case of two observed variables, X and Y, a necessary set of conditions for a representation Z to satisfy the MNI criterion is the following: DISPLAYFORM0 This fully constrains the amount of information. To constrain the necessity of the information in the representation Z, the following conditions must be satisfied: DISPLAYFORM1 These four distributions of z correspond to the two tasks: predict Y given X and predict X given Y. One way to satisfy Equation FORMULA2 is to learn a representation Z X of X only, indicated by the Markov chain Z X ← X ↔ Y. We show this Markov chain as an information diagram in FIG0 (Right). The placement of H(Z X) in that diagram carefully maintains the conditional independence between Y and Z X given X, but is otherwise fully general. 
Some of the entropy of Z X is unassociated with any other variable; some is only associated with X, and some is associated with X and Y together. FIG0 (Right), then, shows diagrammatically the state of the learned representation early in training. At the end of training, we would like Z X to satisfy the equalities in Equation, which corresponds to FIG0 (Left), where the gray region labeled I(X; Y) also corresponds to I(X; Z X) and I(Y; Z X).Given the conditional independence Z X Y|X in our Markov chain, I(Y; Z X) is maximal at I(X; Y), by the data processing inequality. However, I(X; Z X) does not clearly have a constraint that targets I(X; Y). We cannot maximize I(X; Z X) in general while being compatible with the MNI criterion, as that is only constrained from above by H(X) ≥ I(X; Y). Instead, we could use the Information Bottleneck objective BID40 which starts from the same Markov chain and minimizes βI(X; Z X) − I(Y; Z X), but it is not immediately clear what value of β will achieve the MNI.Thus, we need a different approach to hit the MNI. Considering the information diagram in FIG0 (Left), we can notice the following identities when when we have achieved the MNI: DISPLAYFORM0 With our Markov chain and the chain rule of mutual information (Cover & BID12, we have: DISPLAYFORM1 This conditional information is guaranteed to be non-negative, as both terms are mutual informations, and the Markov chain guarantees that I(Y; Z X) is no larger than I(X; Z X), by the data processing inequality. From an optimization perspective, this is ideal -we have a term that we can minimize, and we can directly know how far we are from the optimal value of 0 (measured in nats, so it is interpretable), when we are done (when it's close enough to 0 that we are satisfied), and when our model is insufficient for the task (i.e., when this term isn't close enough to 0). This leads us to the general Conditional Entropy Bottleneck objective: DISPLAYFORM2 Typically we would add a Lagrange multiplier on one of the two terms. In Appendix A, we present some geometric arguments to prefer leaving the two terms balanced. It is straightforward to turn this into a variational objective function that we can minimize. Taking the terms in turn: DISPLAYFORM3 e(z X |x) is our encoder. It is not a variational approximation, even though it has learned parameters. b(z X |y) is the backward encoder, a variational approximation of p(z X |y).In the second term, H(Y) can be dropped because it is constant with respect to the model: DISPLAYFORM4 c(y|z x) is the classifier (although that name is arbitrary, given that Y may not be labels), which variationally approximates p(y|z X).The variational bounds derived above give us a fully tractable objective function that works on large-scale problems and supports amortized inference, Variational Conditional Entropy Bottleneck (VCEB): DISPLAYFORM5 The distributions with letters other than p are assumed to have learned parameters, which we otherwise omit in the notation. In other words, all three of e(·), b(·), and c(·) have learned parameters, just as in the encoder and decoder of a normal VAE BID24, or the encoder, classifier, and marginal in a VIB model. We will name the I(X; Z X |Y) term the Residual Information -this is the excess information in our representation beyond the information shared between X and Y: DISPLAYFORM6 There are a number of natural variations on this objective. We describe a few of them in Appendix E. 
The Information Bottleneck (IB) BID40 learns a representation of X and Y subject to a soft information constraint: DISPLAYFORM0 where β controls the size of the constraint. In Figure 2 we show the optimal surfaces for CEB and IB, labeling the MNI point on both. In Figure 4 we show the same surfaces for finite models and that adjusting β determines a unique point in these information planes relative to I(X; Y).As described in BID40, IB is a tabular method, so it is not usable for amortized inference.5 Two recent works have extended IB for amortized inference. Both of these approaches 4 We write expectations log e(z X |x). They are always with respect to the joint distribution; here, that is p(x, y, z X) = p(x, y)e(z X |x).5 The tabular optimization procedure used for IB trivially applies to CEB, just by setting β = 1 2. A recent work on IB using tabular methods is the Deterministic Information Bottleneck BID38, which learns hard clusterings, rather than the soft clusterings of earlier IB approaches. Figure 2: Geometry of the optimal surfaces for IB and CEB, with all points labeled. CEB rectifies IB's parallelogram by subtracting I(Y; Z) at every point.rely on sweeping β, and do not propose a way to set β directly to train models where I(X; Z) = I(Y; Z) = I(X; Y). BID0 presents InfoDropout, which uses IB to motivate a variation on Dropout BID37. A varational version of IB is presented in BID2. That objective is the Variational Information Bottleneck (VIB): DISPLAYFORM0 Instead of the backward encoder, VIB has a marginal posterior, m(z X), which is a variational approximation to e(z X) = dx p(x)e(z X |x). Additionally, it has a hyperparameter, β. We show in Appendix A that the optimal value for β = 1 2 when attempting to adhere to the MNI criterion. Following, we define the Rate (R): DISPLAYFORM1 We can compare variational CEB with VIB by taking their difference at β = 1 2. Note that both objectives have an elided dependence on log p(y) from the I(Y; Z X) term that we must track: DISPLAYFORM2 Solving for m(z X) when that difference is 0: DISPLAYFORM3 Since the optimal m * (z X) is the marginalization of e(z X |x), at convergence we must have: DISPLAYFORM4 Depending on the distributional families and the parameterizations, this point may be difficult to find, particularly given that m(z X) only gets information about y indirectly through e(z X |x). Consequently, for otherwise equivalent models, we may expect V IB 1 2 to converge to a looser approximation of I(X; Z) = I(Y; Z) = I(X; Y) than CEB. Since VIB optimizes an upper bound on I(X; Z), that means that V IB 1 2 will report R converging to I(X; Y), but will capture less than the MNI. In contrast, if Re X/Y converges to 0, the variational tightness of b(z X |y) to the optimal p(z X |y) depends only on the tightness of c(y|z X) to the optimal p(y|z X). 6 MNI Optimality of CEB In this work we do not attempt to give a formal proof that CEB representations learn the optimal information about the observed data (and certainly the variational form of the objective will prevent that from happening in general cases). However, CEB's targeting of the MNI is motivated by the following simple observations: If I(X; Z) < I(X; Y), then we have thrown out relevant information in X for predicting Y. If I(X; Z) > I(X; Y), then we are including information in X that is not useful for predicting Y. Thus I(X; Z) = I(X; Y) is the "correct" amount of information, which is one of the equalities required in order to satisfy the MNI criterion. 
Only models that successfully learn that amount of information can possibly be MNI-optimal. The second condition of MNI (Equation FORMULA3) is only fully satisfied when optimizing the bidirectional CEB objective, described in Appendix E.2, as log e(z X |x) − log b(z X |y) and log b(z Y |y) − log e(z Y |x) are both 0 only when b(z|y) = p(z|y) and e(z|x) = p(z|x) and the corresponding decoder terms are both maximal. We leave such models for future work. Our primary experiments are focused on comparing the performance of otherwise identical models when we change only the objective function. Consequently, we aren't interested in demonstrating state-of-the-art for a particular classification task. Instead, we are interested in relative differences in performance that can be directly attributed to the difference in objective. With that in mind, we present for classification of Fashion MNIST BID46 for five different models. The five models are: a deterministic model (Determ); three VIB models, with β ∈ {1 2, 10 −1, 10 −2} (VIB 0.5, VIB 0.1, VIB 0.01); and a CEB model. These same models are used in the calibration, out-of-distribution, and adversarial experiments (Sections 8 to 10). Critically, all five models share the same inference architecture mapping X to Y. See Appendices C and D for details on training and the architectures. Since Fashion MNIST doesn't have a prespecified validation set, it offers an opportunity to test training algorithms that only look at training , rather than relying on cross validation. To that end, the five models presented here are the first models with these hyperparameters that we trained on Fashion MNIST. 6 The learning rate for the CEB model was lowered according to the training algorithm described in Appendix C. The other four models followed the same algorithm, but instead of tracking Re X/Y, they simply tracked their training loss. All five models were required to retain the initial learning rate of 0.001 for 40 epochs before they could begin lowering the learning rate. At no point during training did any of the models exhibit non-monotonic test accuracy, so we do not believe that this approach harmed any performance -all five models converged essentially smoothly to their final, reported performance. In spite of the dynamic learning rate schedule, all five models took approximately the same number of epochs to reach the minimum learning rate. Underconfidence occurs when the points are above the diagonal. Overconfidence occurs when the points are below the diagonal. In the case of a simple classification problem with a uniform distribution over classes in the training set, we can directly compute I(X; Y) as log C, where C is the number of classes. 7 See TAB0 for a comparison of the rates between the four variational models, as well as their accuracies. All but VIB 0.5 achieve the same accuracy. All four stochastic models get close to the ideal rate of 2.3 nats, but they get there by different paths. For the VIB models, the lower β is, the higher the rate goes early in training, before converging down to (close to) 2.3 nats. CEB never goes above 2.3 nats. In FIG1, we show calibration plots at various points during training for the four models. Calibration curves help analyze whether models are underconfident or overconfident. Each point in the plots corresponds to a 5% confidence range. Accuracy is averaged for each bin. A well-calibrated model is correct half of the time it gives a confidence of 50% for its prediction. 
All of the networks move from under-to overconfidence during training. However, CEB and VIB 0.5 are only barely overconfident, while β = 0.1 is sufficent to make it nearly as overconfident as the deterministic model. This overconfidence is one of the issues that is correlated with exceeding the MNI during training TAB0. See Appendix A for a geometric explanation for how this can occur. We test the ability of the five models to detect three different out-of-distribution (OoD) detection datasets. U is uniform noise in the image domain. MNIST uses the MNIST test set. Vertical Flip is the most challenging, using vertically flipped Fashion MNIST test images, as originally proposed in.We use three different metrics for thresholding. The first two, H and R, were proposed in. H is the classifier entropy. R is the rate, defined in Section 5. The third metric is specific to CEB: Re X/Ŷ. This is the predicted residual information -since we don't have access to the true value of Y at test time, we useŷ ∼ c(y|z X) to calculate H(Z X |Ŷ). This is no longer a valid bound on Re X/Y, asŷ may not be from the true distribution p(x, y, z X). However, the better the classifier, the closer the estimate should be. These three threshold scores are used with the standard suite of proper scoring rules: False Positive Rate at 95% True Positive Rate (FPR 95% TPR), Area Under the ROC Curve (AUROC), and Area Under the Precision-Recall Curve (AUPR). See BID31 for definitions. The core is that VIB 0.5 performs much less well at the OoD tasks than the other two VIB models and CEB. We believe that this is another of VIB 0.5 learning the right amount of information, but not learning all of the right information, thereby demonstrating that it is not a valid MNI objective, as explored in Appendix A. On the other hand, the other two VIB objectives seem to perform extremely well, which is the benefit they get from capturing a bit more information about the training set. We will see below that there is a price for that information, however. Adversarial examples were first noted in BID39. The first practical attack, Fast Gradient Method (FGM) was introduced shortly after BID15. Since then, many new attacks have been proposed. Most relevant to us is the Carlini-Wagner (CW) attack BID9, which was the first practical attack to directly use a blackbox optimizer to find minimal perturbations. 8 Many defenses have also been proposed, but almost all of them are broken BID8 BID5. This work may be seen as a natural continuation of the adversarial analysis of BID2, which showed that VIB naturally had robustness to whitebox adversaries, including CW. In that work, the authors did not train any VIB models with a learned m(z X), which in much weaker models, as shown in. We believe this is the first work that trains a VIB model with a learned marginal and using it in an adversarial setting. BID9. CW, (C = 1) is CW with an additional confidence penalty set to 1. CW, (C = 1) Det. is a custom CW attack targeting CEB's detection mechanism, Re X/Ŷ. L 0, L 1, L 2, L ∞ report the corresponding norm (mean ±1 std.) of successful adversarial perturbations. Higher norms on CW indicate that the attack had a harder time finding adversarial perturbations, since it starts by looking for the smallest possible perturbation. The remaining columns are as in TAB1. Arrows denote whether higher or lower scores are better. Bold indicates the best score in that column for a particular adversarial attack. 
We consider CW in the whitebox setting to be the current gold standard attack, even though it is more expensive than FGM or the various iterative attacks like DeepFool BID33 or iterative variants of FGM BID27. Running an optimizer directly on the model to find the perturbation that can fool that model tells us much more about the robustness of the model than approaches that focus on attack efficiency. CW searches over the space of perturbation magnitudes, which makes the attack hard to defend against, and thus a strong option for testing robustness. DISPLAYFORM0 Here, we explore three variants of the CW L 2 targeted attack. The implementation the first two CW attacks are from BID35. CW and CW (C = 1) are the baseline CW attack, and CW with a confidence adjustment of 1. Note that in order for these attacks to succeed at all on CEB, we had to increase the default CW learning rate to 5 × 10 −1. Without that increase, CW found almost no adversaries in our early experiments. All other parameters are left at their defaults for CW, apart from setting the clip ranges to. The final attack, CW (C = 1) Det. is a modified version of CW (C = 1) that additionally incorporates a detection tensor into the loss that CW minimizes. For CEB, we had it target minimizing Re X/Ŷ in order to break the network's ability to detect the attack. All of the attacks are targeting the trouser class of Fashion MNIST, as that is the most distinctive class. Targeting a less distinctive class, such as one of the shirt classes, would confuse the difficulty of classifying the different shirts and the robustness of the model to adversaries. We run each of the first three attacks on the entire Fashion MNIST test set (all 10,000 images). For the stochastic networks, we permit 32 encoder samples and take the mean classification (the same number of samples is also used for gradient generation in the attacks to be fair to the attacker). CW is expensive, but we are able to run these on a single GPU in about 30 minutes. However, CW (C = 1) Det. ends up being about 200 times more expensive -we were only able to run 1000 images and only 8 encoder samples, and it took 2 1 2 hours. Consequently, we only run CW (C = 1) Det. on the CEB model. Our metric for robustness is the following: we count the number of adversarial examples that change a correct prediction to an incorrect prediction of the target class, and divide by the number of correct predictions the model makes on the non-adversarial inputs. We additionally measure the size of the ing perturbations using the L 0, L 1, L 2, and L ∞ norms. For CW, a larger perturbation generally indicates that the attack had to work harder to find an adversarial example, making this a secondary indication of robustness. Finally, we measure adversarial detection using the same thresholding techniques from TAB1.The of these experiments are in TAB2. We show all 20,000 images for four of the models in FIG6. The most striking pattern in the models is how well VIB 0.01 and VIB 0.1 do at detection, while VIB 0.5 is dramatically more robust. We think that this is the most compelling indication of the importance of not overshooting I(X; Y) -even minor amounts of overshooting appear to destroy the robustness of the model. On the other hand, VIB 0.5 has a hard time with detection, which indicates that, while it has learned a highly compressed representation, it has not learned the optimal set of bits. 
Thus, as we discuss in Appendix A, VIB trades off between learning the necessary information, which allows it to detect attacks perfectly, and learning the minimum information, which allows it to be robust to attacks. The CEB model permits both -it maintains the necessary information for detecting powerful whitebox attacks, but also retains the minimum information, providing robustness. This is again visible in the CW (C = 1) Det. attack, which directly targets CEB's detection mechanism. Even though it no longer does well detecting the attack, the model becomes more robust to the attack, as indicated both by the much lower attack success rate and the much larger perturbation magnitudes. We replicate the basic experiment from BID48: we use the images from Fashion MNIST, but replace the training labels with fixed random labels. This dataset is information-free in the sense that I(X; Y) = 0. We use that dataset to train multiple deterministic models, CEB models, and a range of VIB models. We find that the CEB model never learns (even after 100 epochs of training), the deterministic model always learns (after about 40 epochs of training it begins to memorize the random labels), and the VIB models only learn with β ≤ 0.001.The fact that CEB and VIB with β near 1 2 manage to resist memorizing random labels is our final empirical demonstration that MNI is a powerful criterion for objective functions. We have presented the basic form of the Conditional Entropy Bottleneck (CEB), motivated by the Minimum Necessary Information (MNI) criterion for optimal representations. We have shown through careful experimentation that simply by switching to CEB, you can expect substantial improvements in OoD detection, adversarial example detection and robustness, calibration, and generalization. Additionally, we have shown that it is possible to get all of these advantages without using any additional form of regularization, and without any new hyperparameters. We have argued empirically that objective hyperparameters can lead to hard-to-predict suboptimal behavior, such as memorizing random labels, or reducing robustness to adversarial examples. In Appendix E and in future work, we will show how to generalize CEB beyond the simple case of two observed variables. It is our perspective that all of the issues explored here -miscalibration, failure at OoD tasks, vulnerability to adversarial examples, and dataset memorization -stem from the same underlying issue, which is retaining too much information about the training data in the learned representation. We believe that the MNI criterion and CEB show a path forward for many tasks in machine learning, permitting fast, amortized inference while ameliorating major problems. a b Figure 4: Geometry of the optimal surfaces for both CEB (purple) and IB (green) for models that can only come within of the optimal surface (a: = 0.1I(X; Y); b: = 0.01I(X; Y)). The tangent lines have the slope of the corresponding β -the tangent point on the ball corresponds to the point on the pareto-optimal frontier for the corresponding model. Note that β determines the "exchange rate" between bits of I(X; Z) and I(Y; Z), which is how we determine the coordinate of the center of the ball. For IB to achieve the MNI point, 2 bits of I(Y; Z) are needed for every bit of I(X; Z). Consequently, even for an infitely powerful model (corresponding to = 0), the only value of β that hits the MNI point is β = 2. 
Thus, knowing the function (β) for a given model and dataset completely determines the model's pareto-optimal frontier. Here we collect a number of that are not critical to the core of the paper, but may be of interest to particular audiences. A Analysis of CEB and IB From Equation FORMULA5 and the definition of CEB in Equation, the following equivalence between CEB and IB is obvious: DISPLAYFORM0 where we are parameterizing IB with β on the I(Y; Z) term for convenience. This equivalence generalizes as follows: DISPLAYFORM1 DISPLAYFORM2 In Figure 4, we show the combined information planes for CEB and IB given the above parameterization. The figures show the simple geometry that determines a point on the pareto-optimal frontier for both objectives. Every such point is fully determined by the function (β) for a given model and dataset, where is the closest the model can approach the true optimal surface. (β) = 0 corresponds to the "infinite" model family that exactly traces out the boundaries of the feasible region. The full feasible regions can be seen in Figure 2.From this geometry we can immediately conclude that if an IB model and a CEB model have the same value of > 0 at equivalent β, the CEB model will always yield a value of I(Y; Z) closer to I(X; Y). This is because the slope of the tangent lines for CEB are always lower, putting the tangent points higher on the ball. This gives part of a theoretical justification for the empirical observations above that V IB 0.5 (equivalent to IB 2 in the parameterization we are describing here) fails to capture as much of the necessary information as the CEB model. Even at the pareto-optimal frontier, V IB 0.5 cannot get I(Y; Z) as close to I(X; Y) as CEB can. Of course, we do not want to claim that this effect accounts for the fairly substantial difference in performance -that is likely to be due to a combination of other factors, including the fact that it is often easier to train continuous conditional distributions (like b(z|y)) than it is to train continuous marginal distributions (like m(z)).We also think that this analysis of the geometry of IB and CEB supports our preference for targeting the MNI point and treating CEB as an objective without hyperparameters. First, there are only a maximum of 4 points of interest in both the IB and CEB information planes (all 4 are visibile in Figure 2): the origin, where there is no information in the representation; the MNI point; the point at (I(Y; Z) = I(X; Y), I(X; Z) = H(X)) (which is an MDL-compatible representation BID17); and the point at (I(Y; Z) = 0, I(X; Z) = H(X|Y)) (which would be the optimal decoder for an MNI representation). These are the only points naturally identified by the dataset -selecting a point on one of the edges between those four points seems to need additional justification. Second, if you do agree with the MNI criterion, for a given model it is impossible to get any closer to the MNI point than by setting CEB's β = 1, due to the convexity of the pareto-optimal frontier. Much more useful is making changes to the model, architecture, dataset, etc in order to make smaller. One possibility in that direction that IB and CEB models offer is inspecting training examples with high rate or residual information to check for label noise, leading to a natural human-in-the-loop model improvement algorithm. Another is using CEB's residual information as a measure of the quality of the trained model, as mentioned in Appendix C. 
In this case, the feasible region for CEB collapses to the line segment I(X; Z|Y) = 0 with 0 ≤ I(Y; Z) ≤ I(X; Y). Similarly, the corresponding IB feasible region is the diagonal line I(X; Z) = I(Y; Z). This case happens if we choose as our task to predict images given labels, for example. We should expect such label-conditional generative models to be particularly easy to train, since the search space is so simple. Additionally, it is never possible to learn a representation that exceeds the MNI, I(X; Z) ≤ H(X) = I(X; Y). As an objective function, CEB is independent of the methods used to optimize it. Here we focus on variational objectives because they are simple, tractable, and well-understood, but any approach to optimize mutual information terms can work, so long as they respect the side of the bounds required by the objective. There are many approaches in the literature that attempt to optimize mutual information terms in some form, including BID26 BID10 BID21 BID20 BID34. It is worth noting that none of those approaches by themselves are compatible with the MNI criterion. Some of them explicitly maximize I(X; Z X), while others maximize I(Y; Z X), but leave I(X; Z X) unconstrained. We expect all of these approaches to capture more than the MNI in general. Because of the properties of Re X/Y, we can consider training algorithms that don't rely on observing validation set performance in order to decide when to lower the learning rate. The closer we can get Re X/Y to 0 on the training set, the better we expect to generalize to data drawn from the same distribution. One simple approach to training is to set a high initial learning rate (possibly with reverse annealing of the learning rate BID16), and then lower the learning rate after any epoch of training that doesn't in a new lowest mean residual information on the training data. This is equivalent to the logic of dev-decay training algorithm of BID45, but does not require the use of a validation set. Additionally, since the training set is typically much larger than a validation set would be, the average loss over the epoch is much more stable, so the learning rate is less likely to be lowered spuriously. The intuition for this algorithm is that Re X/Y directly measures how far from optimal our learned representation is for a given c(y|z X). At the end of training Re X/Y indicates that we could improve performance by increasing the capacity of our architecture or Algorithm 1: Training algorithm that lowers the learning rate when the mean Re X/Y of the previous epoch is not less than the lowest Re * X/Y seen so far. The same idea can be applied to training VIB and deterministic models by tracking that the training loss is always going down. For the experiments in Section 7, we set the values specified in the Input section. −3, min_learning_rate=10 −6, lowering_scale=1 − All of the models in our experiments have the same core architecture: A 7×2 Wide Resnet BID47 for the encoder, with a final layer of D = 4 dimensions for the latent representation, followed by a two layer MLP classifier using ELU BID11 activations with a final categorical distribution over the 10 classes. The stochastic models parameterize the mean and variance of a D = 4 fully covariate multivariate Normal distribution with the output of the encoder. Samples from that distribution are passed into the classifier MLP. Apart from that difference, the stochastic models don't differ from Determ during evaluation. 
None of the five models uses any form of regularization (e.g., L 1, L 2, DropOut BID37, BatchNorm BID22).The VIB models have an additional learned marginal, m(z X), which is a mixture of 240 D = 4 fully covariate multivariate Normal distributions. The CEB model instead has the backward encoder, b(z X |y) which is a D = 4 fully covariate multivariate Normal distribution parameterized by a 1 layer MLP mapping the label, Y = y, to the mean and variance. In order to simplify comparisons, for CEB we additionally train a marginal m(z X) identical in form to that used by the VIB models. However, for CEB, m(z X) is trained using a separate optimizer so that it doesn't impact training of the CEB objective in any way. Having m(z X) for both CEB and VIB allows us to compare the rate, R, of each model except Determ. Any distributional family may be used for the encoder. Reparameterizable distributions BID24 BID14 are convenient, but it is also possible to use the score function trick BID44 to get a high-variance estimate of the gradient for distributions that have no explicit or implicit reparameterization. In general, a good choice for b(z|y) is the same distributional family as e(z|x), or a mixture thereof. These are modeling choices that need to be made by the practitioner, as they depend on the dataset. In this work, we chose normal distributions because they are easy to work with and will be the common choice for many problems, particularly when parameterized with neural networks, but that choice is incidental rather than fundamental. Note that we did not use additional regularization on the deterministic model, but all models have a 4 dimensional bottleneck, which is likely to have acted as a strong regularizer for the deterministic model. Additionally, standard forms of regularization, including stochastic regularization, did not prevent the CW attack from being successful 100% of the time in the original work BID9. Nor did regularization cause the deterministic networks in BID48 to avoid memorizing the training set. Thus, we don't think that our deterministic baseline is disadvantaged on the tasks we considered in Sections 7 and 11. It is worth noting that the conditions for infinite mutual information given in BID4 do not apply to either CEB or VIB, as they both use stochastic encoders e(z X |x). In our experiments using continuous representations, we did not encounter mutual information terms that diverged to infinity, although it is possible to make modeling and data choices that make it more likely that there will be numerical instabilities. This is not a flaw specific to CEB or VIB, however, and we found numerical instability to be almost non-existent across a wide variety of modeling and architectural choices for both variational objectives. Here we describe a few of the more obvious variants of the CEB objective. In the above presentation of CEB, we derived the objective for what may be termed "classification" tasks (although there is nothing in the derivation that restricts the form of either X or Y). However, CEB is fully symmetric, so it is natural to consider the second task defined by our choice of dataset, conditional generation of X given Y = y. In this case, we can augment our graphical model with a new variable, Z Y, and derive the same CEB objective for that variable: DISPLAYFORM0 In the same manner as above, we can derive variational bounds on H(Z Y |X) and H(X|Z Y). In particular, we can variationally bound p(z Y |x) with e(z Y |x). 
Additionally, we can bound p(x|z Y) with a decoder distribution of our choice, d(x|z Y).Because the decoder is maximizing a lower bound on the mutual information between Z Y and X, it can never memorize X. It is directly limited during training to use exactly H(Y) nats of information from Z Y to decode X. For a mean field decoder, this means that the decoder will only output a canonical member of each class. For a powerful decoder, such as an autoregressive decoder, it will learn to select a random member of the class. For discrete Y, this model can trivially be turned into an unconditional generative model by first sampling Y from the training data or using any other appropriate procedure, such as sampling Y uniformly at random. DISPLAYFORM1 Figure 5: Information diagram for the basic hierarchical CEB model, DISPLAYFORM2 Given the presentation of conditional generation above, it is natural to consider that both c(y|z) and d(x|z) are conditional generative models of Y and X, respectively, and to learn a Z that can handle both tasks. This can be done easily with the following bidirectional CEB model: DISPLAYFORM0 This corresponds to the following factorization: p(x, y, z X, z Y) ≡ p(x, y)e(z X |x)b(z Y |y). The two objectives from above then become the following single objective: DISPLAYFORM1 A natural question is how to ensure that Z X and Z Y are consistent with each other. Fortunately, that consistency is trivial to encourage by making the natural variational approximations: p(z Y |x) → e(z Y |x) and p(z X |y) → b(z X |y). The full bidirection variational CEB objective then becomes:min log e(z X |x) − log b(z X |y) − log c(y|z X) DISPLAYFORM2 At convergence, we learn a unified Z that is consistent with both Z X and Z Y, permitting generation of either output given either input in the trained model, in the same spirit as BID42, but without any objective function hyperparameter tuning. Thus far, we have focused on learning a single latent representation (possibly composed of multiple latent variables at the same level). Here, we consider how to learn a hierarchical model with CEB.Consider the graphical model Z 2 ← Z 1 ← X ↔ Y. This is the simplest hierarchical supervised representation learning model. The general form of its information diagram is given in Figure 5.The key observation for generalizing CEB to hierarchical models is that the target mutual information doesn't change. By this, we mean that all of the Z i in the hierarchy should cover I(X; Y) at convergence, which means maximizing I(Y; Z i). It is reasonable to ask why we would want to train such a model, given that the final set of representations are presumably all effectively identical in terms of information content. The answer is simple: doing so allows us to train deep models in a principled manner such that all layers of the network are consistent with each other and with the data. We need to be more careful when considering the residual information terms, though -it is not the case that we want to minimize I(X; Z i |Y), which is not consistent with the graphical model. Instead, we want to minimize I(Z i−1 ; Z i |Y), defining Z 0 = X.This gives the following simple Hierarchical CEB objective: DISPLAYFORM0 DISPLAYFORM1 Because all of the Z i are targetting Y, this objective is as stable as regular CEB. Note that if all of the Z i have the same dimensionality, in principle they may all use the same networks for b(z i |Y) and/or c(y|z i), which may substantially reduce the number of parameters in the model. 
All of the individual loss terms in the objective must still appear, of course. There is no requirement, however, that the Z i have the same latent dimensionality, although doing so may give a unified hiearchical representation. Many of the richest problems in machine learning vary over time. In BID7, the authors define the Predictive Information: DISPLAYFORM0 This is of course just the mutual information between the past and the future. However, under an assumption of temporal invariance (any time of fixed length is expected to have the same entropy), they are able to characterize the predictive information, and show that it is a subextensive quantity: lim T →∞ I(T)/T → 0, where I(T) is the predictive information over a time window of length 2T (T steps of the past predicting T steps into the future). This concise statement tells us that past observations contain vanishingly small information about the future as the time window increases. The application of CEB to extracting the predictive information is straightforward. Given the Markov chain X <t → X ≥t, we learn a representation Z t that optimally covers I(X <t, X ≥t) in Predictive CEB: DISPLAYFORM1 Note that the model entailed by this objective function does not rely on Z <t when predicting X ≥t. A single Z t captures all of the information in X <t and is to be used to predict as far forward as is desired. "Rolling out" Z t to make predictions is a modeling error according to the predictive information. Also note that, given a dataset of sequences, CEB pred may be extended to a bidirectional model, as in Appendix E.2. In this case, two representations are learned, Z <t and Z ≥t. Both representations are for timestep t, the first representing the observations before t, and the second representing the observations from t onwards. As in the normal bidirectional model, using the same encoder and backwards encoder for both parts of the bidirectional CEB objective ties the two representations together. Modeling and architectural choices. As with all of the variants of CEB, whatever entropy remains in the data after capturing the entropy of the mutual information in the representation must be modeled by the decoder. In this case, a natural modeling choice would be a probalistic RNN with powerful decoders per time-step to be predicted. However, it is worth noting that such a decoder would need to sample at each future step to decode the subsequent step. An alternative, if the prediction horizon is short or the predicted data are small, is to decode the entire sequence from Z t in a single, feed-forward network (possibly as a single autoregression over all outputs in some natural sequence). Given the subextensivity of the predictive information, that may be a reasonable choice in stochastic environments, as the useful prediction window may be small. Multi-scale sequence learning. As in WaveNet BID41, it is natural to consider sequence learning at multiple different temporal scales. Combining an architecture like time-dilated WaveNet with CEB is as simple as combining CEB pred with CEB hier (Appendix E.3). In this case, each of the Z i would represent a wider time dilation conditioned on the aggregate Z i−1. The advantage of such an objective over that used in WaveNet is avoiding unnecessary memorization of earlier timesteps. E.5 Unsupervised CEB Pure unsupervised learning is fundamentally an ill-posed problem. Without knowing what the task is, it is impossible to define an optimal representation directly. 
We think that this core issue is what lead the authors of BID6 to prefer barely compressed representations. But by that line of reasoning, it seems that unsupervised learning devolves to lossless compression -perhaps the correct representation is the one that allows you to answer the question: " What is the color of the fourth pixel in the second row?"On the other hand, it also seems challenging to put the decision about what information should be kept into objective function hyperparameters, as in the β VAE and penalty VAE objectives. That work showed that it is possible to constrain the amount of information in the learned representation, but it is unclear how those objective functions keep only the "correct" bits of information for the downstream tasks you might care about. This is in contrast to all of the preceeding discussion, where the task clearly defines the both the correct amount of information and which bits are likely to be important. However, unsupervised representation learning is still an interesting problem, even if it is ill-posed. Our perspective on the importance of defining a task in order to constrain the information in the representation suggests that we can turn the problem into a data modeling problem in which the practitioner who selects the dataset also "models" the likely form of the useful bits in the dataset for the downstream task of interest. In particular, given a dataset X, we propose selecting a function f (X) → X that transforms X into a new random variable X. This defines a paired dataset, P(X, X), on which we can use CEB as normal. Note that choosing the identity function for f in maximal mutual information between X and X (H(X) nats), which will in a representation that is far from the MNI for normal downstream tasks. In other words, representations learned by true autoencoders are unlikely to be any better than simply using the raw X.It may seem that we have not proposed anything useful, as the selection of f is unconstrained, and seems much more daunting than selecting β in a β VAE or σ in a penalty VAE. However, there is a very powerful class of functions that makes this problem much simpler, and that also make it clear using CEB will only select bits from X that are useful. That class of functions is the noise functions. Given a dataset X without labels or other targets, and some set of tasks in mind to be solved by a learned representation, we may select a random noise variable U, and function X = f (X, U) that we believe will destroy the irrelevant information in X. We may then add representation variables Z X, Z X to the model, giving the joint distribution p(x, x, u, z X, z X) ≡ p(x)p(u)p(x | f (x, u))e(z X |x)b(z X |x). This joint distribution is represented in Figure 6.Denoising Autoencoders were originally proposed in BID43. In that work, the authors argue informally that reconstruction of corrupted inputs is a desirable property of learned representations. In this paper's notation, we could describe their proposed objective as min H(X|Z X), or equivalently min log d(x|z X = f (x, η)) x,η∼p(x)p(θ).Here we make this idea somewhat more formal through the MNI criterion and the derivation of CEB as the optimal objective for that criterion. We also note that, practically speaking, we would like to learn a representation that is consistent with uncorrupted inputs as well. Consequently, we are going to use a bidirectional model. 
CEB denoise ≡ min I(X; Z X |X) − I(X ; Z X) + I(X ; Z X |X) − I(X; Z X) ⇒ min −H(Z X |X) + H(Z X |X) + H(X |Z X) − H(Z X |X) + H(Z X |X) + H(X|Z X) This requires two encoders and two decoders, which may seem expensive, but it permits a consistent learned representation that can be used cleanly for downstream tasks. Using a single encoder/decoder pair would in either an encoder that does not work well with uncorrupted inputs, or a decoder that only generates noisy outputs. If you are only interested in the learned representation and not in generating good reconstructions, the objective simplifies to the first three terms. In that case, the objective is properly called a Noising CEB Autoencoder, as the model predicts the noisy X from X: CEB noise ≡ min I(X; Z X |X) − I(X ; Z X) ⇒ min −H(Z X |X) + H(Z X |X) + H(X |Z X) denoising CEB, introduced above. We present the assumed graphical model in FIG4. We give the corresponding Semi-Supervised CEB directly: DISPLAYFORM0 1 Y∈(X,Y) is the indicator function, equal to 1 when a Y is part of the paired data, and equal to 0 otherwise. In other words, if we have Y = y paired with a given X = x, we can include those terms in the objective. If we do not have that, we can simply leave them out. Note that it is straightforward to generalize this to semisupervised learning with two or more observations that are both being learned unsupervisedly, but also have some amount of paired data. For example, images and natural language, assuming we have a reasonable noise model for unsupervisedly learning natural language. Here we provide some visualizations of the Fashion MNIST tasks. In Figure 8, we show a trained 2D CEB latent representation of Fashion MNIST. The model learned to locate closely related concepts together, including the cluster of "shirt" classes near the center, and the cluster of "shoe" classes toward the lower right. In spite of the restriction to 2 dimensions, this model achieves ∼ 92% on the test set. In FIG6, the 10,000 test images and their 10,000 adversaries are shown for four of the models. It is easy to see at a glance that the CEB model organizes all of the adversaries into the "trousers" class, with a crisp devision between the true examples and the adversaries. In contrast, the two VIB models have adversaries mixed throughout. However, all three models are clearly preferable to the deterministic model, which has all of the adversaries mixed into the "trousers" class with no ability to distinguish between adversaries and true examples. | The Conditional Entropy Bottleneck is an information-theoretic objective function for learning optimal representations. | 1,223 | scitldr |
Deep representation learning has become one of the most widely adopted approaches for visual search, recommendation, and identification. Retrieval of such representations from a large database is however computationally challenging. Approximate methods based on learning compact representations, have been widely explored for this problem, such as locality sensitive hashing, product quantization, and PCA. In this work, in contrast to learning compact representations, we propose to learn high dimensional and sparse representations that have similar representational capacity as dense embeddings while being more efficient due to sparse matrix multiplication operations which can be much faster than dense multiplication. Following the key insight that the number of operations decreases quadratically with the sparsity of embeddings provided the non-zero entries are distributed uniformly across dimensions, we propose a novel approach to learn such distributed sparse embeddings via the use of a carefully constructed regularization function that directly minimizes a continuous relaxation of the number of floating-point operations (FLOPs) incurred during retrieval. Our experiments show that our approach is competitive to the other baselines and yields a similar or better speed-vs-accuracy tradeoff on practical datasets. Learning semantic representations using deep neural networks (DNN) is now a fundamental facet of applications ranging from visual search , semantic text matching , oneshot classification , clustering , and recommendation . The high-dimensional dense embeddings generated from DNNs however pose a computational challenge for performing nearest neighbor search in large-scale problems with millions of instances. In particular, when the embedding dimension is high, evaluating the distance of any query to all the instances in a large database is expensive, so that efficient search without sacrificing accuracy is difficult. Representations generated using DNNs typically have a higher dimension compared to hand-crafted features such as SIFT , and moreover are dense. The key caveat with dense features is that unlike bag-of-words features they cannot be efficiently searched through an inverted index, without approximations. Since accurate search in high dimensions is prohibitively expensive in practice , one has to typically sacrifice accuracy for efficiency by resorting to approximate methods. Addressing the problem of efficient approximate Nearest-Neighbor Search (NNS) or Maximum Inner-Product Search (MIPS) is thus an active area of research, which we review in brief in the related work section. Most approaches aim to learn compact lower-dimensional representations that preserve distance information. While there has been ample work on learning compact representations, learning sparse higher dimensional representations have been addressed only recently . As a seminal instance, propose an end-to-end approach to learn sparse and high-dimensional hashes, showing significant speed-up in retrieval time on benchmark datasets compared to dense embeddings. This approach has also been motivated from a biological viewpoint by relating to a fruit fly's olfactory circuit, thus suggesting the possibility of hashing using higher dimensions instead of reducing the dimensionality. Furthermore, as suggested by , sparsity can have additional advantages of linear separability and information disentanglement. 
In a similar vein, in this work, we propose to learn high dimensional embeddings that are sparse and hence efficient to retrieve using sparse matrix multiplication operations. In contrast to compact lowerdimensional ANN-esque representations that typically lead to decreased representational power, a key facet of our higher dimensional sparse embeddings is that they can have the same representational capacity as the initial dense embeddings. The core idea behind our approach is inspired by two key observations: (i) retrieval of d (high) dimensional sparse embeddings with fraction p of non-zero values on an average, can be sped up by a factor of 1/p. (ii) The speed up can be further improved to a factor of 1/p 2 by ensuring that the non-zero values are evenly distributed across all the dimensions. This indicates that sparsity alone is not sufficient to ensure maximal speedup; the distribution of the non-zero values plays a significant role as well. This motivates us to consider the effect of sparsity on the number of floating point operations (FLOPs) required for retrieval with an inverted index. We propose a penalty function on the embedding vectors that is a continuous relaxation of the exact number of FLOPs, and encourages an even distribution of the non-zeros across the dimensions. We apply our approach to the large scale metric learning problem of learning embeddings for facial images. Our training loss consists of a metric learning loss aimed at learning embeddings that mimic a desired metric, and a FLOPs loss to minimize the number of operations. We perform an empirical evaluation of our approach on the Megaface dataset , and show that our proposed method successfully learns high-dimensional sparse embeddings that are orders-of-magnitude faster. We compare our approach to multiple baselines demonstrating an improved or similar speed-vs-accuracy trade-off. The rest of the paper is organized as follows. In Section 3 we analyze the expected number of FLOPs, for which we derive an exact expression. In Section 4 we derive a continuous relaxation that can be used as a regularizer, and optimized using gradient descent. We also provide some analytical justifications for our relaxation. In Section 5 we then compare our method on a large metric learning task showing an improved speed-accuracy trade-off compared to the baselines. Learning Compact Representations, ANN. Exact retrieval of the top-k nearest neighbours is expensive in practice for high-dimensional dense embeddings learned from deep neural networks, with practitioners often resorting to approximate nearest neighbours (ANN) for efficient retrieval. Popular approaches for ANN include Locality sensitive hashing (LSH) (; ;) relying on random projections, Navigable small world graphs (NSW) and hierarchical NSW (HNSW) based on constructing efficient search graphs by finding clusters in the data, Product Quantization (PQ) approaches which decompose the original space into a cartesian product of low-dimensional subspaces and quantize each of them separately, and Spectral hashing which involves an NP hard problem of computing an optimal binary hash, which is relaxed to continuous valued hashes, admitting a simple solution in terms of the spectrum of the similarity matrix. 
Overall, for compact representations and to speed up query times, most of these approaches use a variety of carefully chosen data structures, such as hashes , locality sensitive hashes , inverted file structure , trees , clustering , quantization sketches , as well as dimensionality reductions based on principal component analysis and t-SNE . End to End ANN. Learning the ANN structure end-to-end is another thread of work that has gained popularity recently. propose to learn binary representations for the Hamming metric by minimizing a margin based triplet loss. use the signed output of a deep neural network as hashes, while imposing independence and orthogonality conditions on the hash bits. Other end-to-end learning approaches for learning hashes include . An advantage of end-to-end methods is that they learn hash codes that are optimally compatible to the feature representations. Sparse Representations. Sparse representations have been previously studied from various viewpoints. explore sparse neural networks in modeling biological neural networks and show improved performance, along with additional advantages such as better linear separability and information disentangling. Ranzato et al. (2008; ; propose learning sparse features using deep belief networks. explore sparse coding with an overcomplete basis, from a neurobiological viewpoint. Sparsity in auto-encoders have been explored by ; . provide sufficient conditions to learn sparse representations, and also further provide an excellent review of sparse autoencoders. Dropout and a number of its variants (; ;) have also been shown to impose sparsity in neural networks. High-Dimensional Sparse Representations. Sparse deep hashing (SDH) is an end-to-end approach that involves starting with a pre-trained network and then performing alternate minimization consisting of two minimization steps, one for training the binary hashes and the other for training the continuous dense embeddings. The first involves computing an optimal hash best compatible with the dense embedding using a min-cost-max-flow approach. The second step is a gradient descent step to learn a dense embedding by minimizing a metric learning loss. A related approach, k-sparse autoencoders learn representations in an unsupervised manner with at most k non-zero activations. The idea of high dimensional sparse embeddings is also reinforced by the sparse-lifting approach where sparse high dimensional embeddings are learned from dense features. The idea is motivated by the biologically inspired fly algorithm . Experimental indicated that sparse-lifting is an improvement both in terms of precision and speed, when compared to traditional techniques like LSH that rely on dimensionality reduction. 1 regularization, Lasso. The Lasso is the most popular approach to impose sparsity and has been used in a variety of applications including sparsifying and compressing neural networks. The group lasso is an extension of lasso that encourages all features in a specified group to be selected together. Another extension, the exclusive lasso , on the other hand, is designed to select a single feature in a group. Our proposed regularizer, originally motivated by idea of minimizing FLOPs closely resembles exclusive lasso. Our focus however is on sparsifying the produced embeddings rather than sparsifying the parameters. Sparse Matrix Vector Product (SpMV). Existing work for SpMV computations include , proposing algorithms based on inverted indices. 
Inverted indices are however known to suffer from severe cache misses. Linear algebra back-ends such as BLAS rely on efficient cache accesses to achieve speedup.;; propose cache efficient algorithms for sparse matrix vector products. There has also been substantial interest in speeding up SpMV computations using specialized hardware such as GPUs (; Vázquez et al., 2011), FPGAs , and custom hardware . Metric Learning. While there exist many settings for learning embeddings (; ;) in this paper we restrict our attention to the context of metric learning . Some examples of metric learning losses include large margin softmax loss for CNNs , triplet loss , and proxy based metric loss . In this section we study the effect of sparsity on the expected number of FLOPs required for retrieval and derive an exact expression for the expected number of FLOPs. The main idea in this paper is based on the key insight that if each of the dimensions of the embedding are non-zero with a probability p (not necessarily independently), then it is possible to achieve a speedup up to an order of 1/p 2 using an inverted index on the set of embeddings. Consider two embedding vectors u, v. Computing u T v requires computing only the pointwise product at the indices k where both u k and v k are non-zero. This is the main motivation behind using inverted indices and leads to the aforementioned speedup. Before we analyze it more formally, we introduce some notation. be a set of n independent training samples drawn from Z = X × Y according to a distribution P, where X, Y denote the input and label spaces respectively. Let F = {f θ : X → R d | θ ∈ Θ} be a class of functions parameterized by θ ∈ Θ, mapping input instances to d-dimensional embeddings. Typically, for image tasks, the function is chosen to be a suitable CNN . Suppose X, Y ∼ P, then define the activation probability p j = P(f θ (X) j = 0), and its empirical versionp j = We now show that sparse embeddings can lead to a quadratic speedup. Consider a d-dimensional sparse query vector u q = f θ (x q) ∈ R d and a database of n sparse vectors.., n) are sampled independently from P. Computing the vector matrix product Du q requires looking at only the columns of D corresponding to the non-zero entries of u q given by 1 Furthermore, in each of those columns we only need to look at the non-zero entries. This can be implemented efficiently in practice by storing the non-zero indices for each column in independent lists, as depicted in Figure 1a. The number of FLOPs incurred is given by, Taking the expectation on both sides w.r.t. x q, x (i) and using the independence of the data, we get where X ∼ P is an independent random sample. Since the expected number of FLOPs scales linearly with the number of vectors in the database, a more suitable quantity is the mean-FLOPs-per-row defined as Note that for a fixed amount of sparsity this is minimized when each of the dimensions are non-zero with equal probability 2 (so that as a regularizer, F(f θ, P) will in turn encourage such a uniform distribution across dimensions). Given such a uniform distribution, compared to dense multiplication which has a complexity of O(d) per row, we thus get an improvement by a factor of 1/p 2 (p < 1). Thus when only p fraction of all the entries is non-zero, and evenly distributed across all the columns, we achieve a speedup of 1/p 2. Note that independence of the non-zero indices is not necessary due to the linearity of expectation. FLOPs versus Speedup. 
While FLOPs reduction is a reasonable measure of speedup on primitive processors with limited parallelization and cache memory, FLOPs is not an accurate measure of actual speedup on mainstream commercial processors such as Intel's CPUs and Nvidia's GPUs, as the latter have cache and SIMD (Single-Instruction Multiple Data) mechanisms highly optimized for dense matrix multiplication, while sparse matrix multiplications are inherently less tailored to their cache and SIMD design. On the other hand, there have been threads of research on hardware with cache and parallelization tailored to sparse operations that show speedup proportional to the FLOPs reduction. Modeling the cache and other hardware aspects can potentially lead to better performance but less generality, and is left to our future work. Figure 1: (a) The colored cells denote non-zero entries, and the arrows indicate the list structure for each of the columns, with solid arrows denoting links that were traversed for the given query. The green and grey cells denote the non-zero entries that were accessed and not accessed, respectively. The non-zero values in Du_q (blue) can be computed using only the common non-zero values (green). (b) Pseudocode for building the per-column index of non-zero values and for scoring a sparse query u_q against it, returning the score vector s. The ℓ1 regularization is the most common approach to induce sparsity. However, as we will also verify experimentally, it does not ensure a uniform distribution of the non-zeros across all the dimensions, which is required for the optimal speed-up. Therefore, we resort to incorporating the actual FLOPs incurred directly into the loss function, which will lead to an optimal trade-off between search time and accuracy. The FLOPs F(f_θ, P), being a discontinuous function of the model parameters, is hard to optimize, and hence we will instead optimize a continuous relaxation of it. Denote by ℓ(f_θ, D) any metric loss on D for the embedding function f_θ. The goal in this paper is to minimize the loss while controlling the expected FLOPs F(f_θ, P) defined in Eqn. 2. Since the distribution P is unknown, we use the samples to get an estimate of F(f_θ, P). Recall the empirical fraction of non-zero activations, p̂_j = (1/n) Σ_{i=1}^n 1[f_θ(x_i)_j ≠ 0], which converges in probability to p_j. Therefore, a consistent estimator for F(f_θ, P) based on the samples D is given by F(f_θ, D) = Σ_{j=1}^d p̂_j². Note that F denotes either the empirical or population quantity depending on whether the functional argument is P or D. We now consider the following regularized loss, min_θ ℓ(f_θ, D) + λ F(f_θ, D), for some parameter λ that controls the FLOPs-accuracy tradeoff. The regularized loss poses a further hurdle, as p̂_j and consequently F(f_θ, D) are not continuous due to the presence of the indicator functions. We thus compute the following continuous relaxation. Define the mean absolute activation a_j = E[|f_θ(X)_j|] and its empirical version ā_j = (1/n) Σ_{i=1}^n |f_θ(x_i)_j|, which is the ℓ1 norm of the activations (scaled by 1/n) in contrast to the ℓ0 quasi-norm in the FLOPs calculation. Define the relaxations F̃(f_θ, P) = Σ_{j=1}^d a_j² and F̃(f_θ, D) = Σ_{j=1}^d ā_j². We propose to minimize the following relaxation, which can be optimized using any off-the-shelf stochastic gradient descent optimizer: min_θ ℓ(f_θ, D) + λ F̃(f_θ, D). Sparse Retrieval. During inference, the sparse vector of a query image is first obtained from the learned model and the nearest neighbour is searched in a database of sparse vectors forming a sparse matrix.
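As an aside before the retrieval details, a minimal numpy sketch of the relaxed objective defined above is given here; the metric loss ℓ is left as a placeholder and the example embeddings are synthetic.

```python
import numpy as np

def flops_regularizer(activations):
    """Continuous relaxation F~(f_theta, D) = sum_j ( mean_i |a_ij| )^2."""
    abs_mean = np.abs(activations).mean(axis=0)      # a_bar_j, one value per dimension
    return float(np.sum(abs_mean ** 2))

def flops_estimate(activations):
    """Empirical estimate F(f_theta, D) = sum_j p_hat_j^2, with p_hat_j the non-zero fraction."""
    p_hat = (activations != 0).mean(axis=0)
    return float(np.sum(p_hat ** 2))

# Hypothetical use inside a training step (metric_loss is a placeholder):
#   total_loss = metric_loss(embeddings, labels) + lam * flops_regularizer(embeddings)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = np.maximum(rng.normal(size=(64, 1024)) - 1.5, 0.0)   # sparse ReLU-style embeddings
    print(flops_regularizer(emb), flops_estimate(emb))
```

In training, the same quantity would be computed with the framework's differentiable ops (the paper trains with TensorFlow) so that gradients reach the encoder.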
An efficient algorithm to compute the dot product of the sparse query vector with the sparse matrix is presented in Figure 1b. This consists of first building a list of the non-zero values and their positions in each column. As motivated in Section 3, given a sparse query vector, it is sufficient to only iterate through the non-zero values and the corresponding columns. Using the scores from the above step, a shortlist of candidates having the top scores is first constructed. The shortlisted candidates are further re-ranked using the dense embeddings. The number of candidates is chosen such that the dense re-ranking time does not dominate the sparse ranking time. Comparison to SDH . It is instructive to contrast our approach with that of SDH . In contrast to the binary hashes in SDH, our approach learns sparse real valued representations. SDH uses a min-cost-max-flow approach in one of the training steps, while we train ours only using SGD. During inference in SDH, a shortlist of candidates is first created by considering the examples in the database that have hashes with non-empty intersections with the query hash. The candidates are further re-ranked using the dense embeddings. The shortlist in our approach on the other hand is constituted of the examples with the top scores from the sparse embeddings. Comparison to unrelaxed FLOPs regularizer. We provide an experimental comparison of our continuous relaxation based FLOPs regularizer to its unrelaxed variant, showing that the performance of the two are markedly similar. Setting up this experiment requires some analytical simplifications based on recent DNN analyses. We first recall recent that indicate that the output of a batch norm layer nearly follows a Gaussian distribution , so that in our context, we could make the simplifying approximation that, ρ is the ReLU activation, and where we suppress the dependency of µ j and σ j on P. We experimentally verify that this assumption holds by minimizing the KS distance between the CDF of ρ(X) with X ∼ N (µ, σ 2) and the empirical CDF of the activations, with respect to µ, σ. Figure 2a shows the empirical CDF and the fitted CDF of ρ(X) for two different architectures. While µ j, σ j cannot be tuned independently for j ∈ [d] due to their dependence on θ, consider a further simplification where the parameters are independent of each other. Suppose for j ∈ {1, 2}, f θ (X) j = ReLU(X) where X ∼ N (µ j, σ 2 j), and θ = (µ 1, µ 2, σ 1, σ 2). We analyze how minimizing F(f θ, P) compares to minimizing F(f θ, P). Note that we consider the population quantities here instead of the empirical quantities, as they are more amenable to theoretical analyses. We also consider the 1 regularizer as a baseline. We initialize with θ = (µ 1, µ 2, σ 1, σ 2) = (−1/4, −1.3, 1, 1), and minimize the three quantities w.r.t. θ via gradient descent with infinitesimally small learning rates. For this contrastive analysis, we do not consider the effect of the metric loss. Note that while the empirical quantity F(f θ, D) cannot be optimized via gradient descent, it is possible to do so for its population counterpart F(f θ, P) since it is available in closed form when making Gaussian assumptions. The details of computing the gradients can be found in Appendix A. Figure 2b shows the trajectory of the activation probabilities (p 1, p 2) during optimization. We start with (p 1, p 2) = (0.4, 0.1), and plot the trajectory taken when performing gradient descent. 
Without the effect of the metric loss, the probabilities are expected to go to zero as observed in the figures. It can be seen that, in contrast to the 1 -regularizer, F and F tend to sparsify the less sparse activation (p 1) at a faster rate, which corroborates the fact that they encourage an even distribution of non-zeros. F promotes orthogonality. We next show that when the embeddings are normalized to have a unit norm, as typically done in metric learning, then minimizing F(f θ, D) is equivalent to promoting orthogonality on the absolute values of the embedding vectors. Let f θ (x) 2 = 1, ∀x ∈ X, we then have the following: is minimized when the vectors {|f θ (x i)|} n i=1 are orthogonal. Metric learning losses aim at minimizing the interclass dot product, whereas the FLOPs regularizer aims at minimizing pairwise dot products irrespective of the class, leading to a tradeoff between sparsity and accuracy. This approach of pushing the embeddings apart, bears some resemblance to the idea of spreading vectors where an entropy based regularizer is used to uniformly distribute the embeddings on the unit sphere, albeit without considering any sparsity. Maximizing the pairwise dot product helps in reducing FLOPs as is illustrated by the following toy example. Consider a set of d Then p,q∈ [1:d] |v p |, |v q | is minimized when v p = e p, where e p is an one-hot vector with the p th entry equal to 1 and the rest Figure (a) shows that the CDF of the activations (red) closely resembles the CDF of ρ(X) (blue) where X is a Gaussian random variable. Figure (b) shows that F and F behave similarly by sparsifying the less sparser activation at a faster rate when compared to the 1 regularizer. 0. The FLOPs regularizer thus tends to spread out the non-zero activations in all the dimensions, thus producing balanced embeddings. This simple example also demonstrates that when the number of classes in the training set is smaller or equal to the number of dimensions d, a trivial embedding that minimizes the metric loss and also achieves a small number of FLOPs is f θ (x) = e y where y is true label for x. This is equivalent to predicting the class of the input instance. The caveat with such embeddings is that they might not be semantically meaningful beyond the specific supervised task, and will naturally hurt performance on unseen classes, and tasks where the representation itself is of interest. In order to avoid such a collapse in our experiments, we ensure that the embedding dimension is smaller than the number of training classes. Furthermore, as recommended by , we perform all our evaluations on unseen classes. We evaluate our proposed approach on a large scale metric learning dataset: the Megaface used for face recognition. This is a much more fine grained retrieval tasks (with 85k classes for training) compared to the datasets used by. This dataset also satisfies our requirement of the number of classes being orders of magnitude higher than the dimensions of the sparse embedding. As discussed in Section 4, a few number of classes during training can lead the model to simply learn an encoding of the training classes and thus not generalize to unseen classes. Face recognition datasets avoid this situation by virtue of the huge number of training classes and a balanced distribution of examples across all the classes. consisting of 1 million images spanning 85k classes. 
We evaluate with 1 million distractors from the Megaface dataset and 3.5k query images from the Facescrub dataset , which were not seen during training. Network Architecture. We experiment with two architectures: MobileFaceNet , and ResNet-101 . We use ReLU activations in the embedding layer for MobileFaceNet, and SThresh activations for ResNet. The activations are 2 -normalized to produce an embedding on the unit sphere, and used to compute the Arcface loss . We learn 1024 dimensional sparse embeddings for the 1 and F regularizers; and 128, 512 dimensional dense embeddings as baselines. All models were implemented in Tensorflow with the sparse retrieval algorithm implemented in C++. 2 The re-ranking step used 512-d dense embeddings. Activation Function. In practice, having a non-linear activation at the embedding layer is crucial for sparsification. Layers with activations such as ReLU are easier to sparsify due to the bias parameter in the layer before the activation (linear or batch norm) which acts as a direct control knob to the sparsity. More specifically, ReLU(x − λ) can be made more (less) sparse by increasing (decreasing) the components of λ, where λ is the bias parameter of the previous linear layer. In this paper we consider two types of activations: ReLU(x) = max(x, 0), and the soft thresholding operator SThresh(x) = sgn(x) max(|x|−1/2, 0) . ReLU activations always produce positive values, whereas soft thresholding can produce negative values as well. Practical Considerations. In practice, setting a large regularization weight λ from the beginning is harmful for training. Sparsifying too quickly using a large λ leads to many dead activations (saturated to zero) in the embedding layer and the model getting stuck in a local minima. Therefore, we use an annealing procedure and gradually increase λ throughout the training using a regularization weight schedule λ(t): N → R that maps the training step to a real valued regularization weight. In our experiments we choose a λ(t) that increases quadratically as λ(t) = λ(t/T) 2, until step t = T, where T is the threshold step beyond which λ(t) = λ. Baselines. We compare our proposed F-regularizer, with multiple baselines: exhaustive search with dense embeddings, sparse embeddings using 1 regularization, Sparse Deep Hashing (SDH) , and PCA, LSH, PQ applied to the 512 dimensional dense embeddings from both the architectures. We train the SDH model using the aforementioned architectures for 512 dimensional embeddings, with number of active hash bits k = 3. We use numpy (using efficient MKL optimizations in the backend) for matrix multiplication required for exhaustive search in the dense and PCA baselines. We use the CPU version of the Faiss library for LSH and PQ (we use the IVF-PQ index from Faiss). Further details on the training hyperparameters and the hardware used can be found in Appendix B. We report the recall and the time-per-query for various hyperparameters of our proposed approach and the baselines, yielding trade-off curves. The reported times include the time required for re-ranking. The trade-off curves for MobileNet and ResNet are shown in Figures 3a and 3c respectively. We observe that while vanilla 1 regularization is an improvement by itself for some hyperparameter settings, the F regularizer is a further improvement, and yields the most optimal trade-off curve. 
SDH has a very poor speed-accuracy trade-off, which is mainly due to the explosion in the number of shortlisted candidates with increasing number of active bits leading to an increase in the retrieval time. On the other hand, while having a small number of active bits is faster, it leads to a smaller recall. For the other baselines we notice the usual order of performance, with PQ having the best speed-up compared to LSH and PCA. While dimensionality reduction using PCA leads to some speed-up for relatively high dimensions, it quickly wanes off as the dimension is reduced even further. We also report the sub-optimality ratio R sub = F(f θ, D)/dp 2 computed over the dataset D, wherē p = 1 d d j=1p j is the mean activation probability estimated on the test data. Notice that R ≥ 1, and the optimal R = 1 is achieved whenp j =p, ∀j ∈ [1 : d], that is when the non-zeros are evenly distributed across the dimensions. The sparsity-vs-suboptimality plots for MobileNet and ResNet are shown in Figures 3a and 3c respectively. We notice that the F-regularizer yields values of R closer to 1 when compared to the 1 -regularizer. For the MobileNet architecture we notice that the 1 regularizer is able to achieve values of R close to that of F in the less sparser region. However, the gap increases substantially with increasing sparsity. For the ResNet architecture on the other hand the 1 regularizer yields extremely sub-optimal embeddings in all regimes. The F regularizer is therefore able to produce more balanced distribution of non-zeros. The sub-optimality is also reflected in the recall values. The gap in the recall values of the 1 and F models is much higher when the sub-optimality gap is higher, as in the case of ResNet, while it is small when the sub-optimality gap is smaller as in the case of MobileNet. This shows the significance of having a balanced distribution of non-zeros. In this paper we proposed a novel approach to learn high dimensional embeddings with the goal of improving efficiency of retrieval tasks. Our approach integrates the FLOPs incurred during retrieval into the loss function as a regularizer and optimizes it directly through a continuous relaxation. We provide further insight into our approach by showing that the proposed approach favors an even distribution of the non-zero activations across all the dimensions. We experimentally showed that our approach indeed leads to a more even distribution when compared to the 1 regularizer. We compared our approach to a number of other baselines and showed that it has a better speed-vs-accuracy trade-off. Overall we were able to show that sparse embeddings can be around 50× faster compared to dense embeddings without a significant loss of accuracy. Proof. Follows directly from Lemma 3. Lemma 5. Proof. Follows directly from Lemma 2. All images were resized to size 112 × 112 and aligned using a pre-trained aligner 3. For the Arcloss function, we used the recommended parameters of margin m = 0.5 and temperature s = 64. We trained our models on 4 NVIDIA Tesla V-100 GPUs using SGD with a learning rate of 0.001, momentum of 0.9. Both the architectures were trained for a total of 230k steps, with the learning rate being decayed by a factor of 10 after 170k steps. We use a batch size of 256 and 64 per GPU for MobileFaceNet for ResNet respectively. Pre-training in SDH is performed in the same way as described above. The hash learning step is trained on a single GPU with a learning rate of 0.001. 
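The sub-optimality ratio used above can be computed directly from a matrix of embeddings; the following numpy sketch estimates p̂_j, the mean activation probability p̄, the FLOPs estimate F = Σ_j p̂_j², and R_sub = F/(d p̄²), assuming the embeddings are given as a dense (n, d) array with exact zeros for inactive dimensions.

```python
import numpy as np

def suboptimality_ratio(embeddings):
    """R_sub = F / (d * p_bar^2), where F = sum_j p_hat_j^2.

    embeddings: (n, d) array of sparse embeddings (inactive entries are exactly 0).
    R_sub == 1 when the non-zeros are evenly spread over the d dimensions."""
    p_hat = (embeddings != 0).mean(axis=0)        # activation probability per dimension
    p_bar = p_hat.mean()
    flops_per_row = float(np.sum(p_hat ** 2))     # estimate of mean FLOPs-per-row, F(f_theta, D)
    d = embeddings.shape[1]
    return flops_per_row / (d * p_bar ** 2), flops_per_row, p_bar

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Balanced sparsity: every dimension is active with the same probability.
    balanced = rng.uniform(size=(5000, 512)) * (rng.uniform(size=(5000, 512)) < 0.05)
    # Unbalanced sparsity: a few dimensions carry most of the activations.
    probs = np.where(np.arange(512) < 32, 0.4, 0.027)
    unbalanced = rng.uniform(size=(5000, 512)) * (rng.uniform(size=(5000, 512)) < probs)
    print(suboptimality_ratio(balanced)[0])    # close to 1
    print(suboptimality_ratio(unbalanced)[0])  # substantially larger than 1
```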
The ResNet model is trained for 200k steps with a batch size of 64, and the MobileFaceNet model is trained for 150k steps with a batch size of 256. We set the number of active bits k = 3 and a pairwise cost of p = 0.1. Hyper-parameters for MobileNet models. Re-ranking. We use the following heuristic to create the shortlist of candidates after the sparse ranking step. We first shortlist all candidates with a score greater than some confidence threshold. For our experiments we set the confidence threshold to be equal to 0.25. If the size of this shortlist is larger than k, it is further shrunk by consider the top k scorers. For all our experiments we set k = 1000. This heuristic avoids sorting the whole array, which can be a bottleneck in this case. The parameters are chosen such that the time required for the re-ranking step does not dominate the total retrieval time. 1. All models were trained on 4 NVIDIA Tesla V-100 GPUs with 16G of memory. 2. System Memory: 256G. 3. CPU: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz. 4. Number of threads: 32. 5. Cache: L1d cache 32K, L1i cache 32K, L2 cache 256K, L3 cache 46080K. All timing experiments were performed on a single thread in isolation. C ADDITIONAL C.1 WITHOUT RE-RANKING Figure 4 shows the comparison of the approaches with and without re-ranking. We notice that there is a significant dip in the performance without re-ranking with the gap being smaller for ResNet with FLOPs regularization. We also notice that the FLOPs regularizers has a better trade-off curve for the no re-ranking setting as well. In the main text we have reported the recall@1 which is a standard face recognition metric. This however is not sufficient to ensure good face verification performance. The goal in face verification is to predict whether two faces are similar or dissimilar. A natural metric in such a scenario is the FPR-TPR curve. Standard face verification datasets include LFW and AgeDB . We produce embeddings using our trained models, and use them to compute similarity scores (dot product) for pairs of images. The similarity scores are used to compute the FPR-TPR curves which are shown in Figure 5. We notice that for curves with similar probability of activation p, the FLOPs regularizer performs better compared to 1. This demonstrates the efficient Figure 5: FPR-TPR curves. The 1 curves are all shown in shades of red, where as the FLOPs curves are all shown in shades of blue. The probability of activation is provided in the legend for comparison. For curves with similar probability of activation p, the FLOPs regularizer performs better compared to 1, thus demonstrating that the FLOPs regularizer learns richer representations for the same sparsity. utilization of all the dimensions in the case of the FLOPs regularizer that helps in learning richer representations for the same sparsity. We also observe that the gap between sparse and dense models is smaller for ResNet, thus suggesting that the ResNet model learns better representations due to increased model capacity. Lastly, we also note that the gap between the dense and sparse models is smaller for LFW compared to AgeDB, thus corroborating the general consensus that LFW is a relatively easier dataset. We also experimented with the Cifar-100 dataset consisting of 60000 examples and 100 classes. Each class consists of 500 train and 100 test examples. We compare the 1 and FLOPs regularized approaches with the sparse deep hashing approach. All models were trained using the triplet loss and embedding dim d = 64. 
For the dense and DH baselines, no activation was used on the embeddings. For the 1 and FLOPs regularized models we used the SThresh activation. Similar to , the train-test and test-test precision values have been reported in Table 1. Furthermore, the reported are without re-ranking. Cifar-100 being a small dataset, we only report the FLOPs-per-row, as time measurements can be misleading. In our experiments, we achieved slightly higher precisions for the dense model compared to . We notice that our models use less than 50% of the computation compared to SDH, albeit with a slightly lower precision. Table 1: Cifar-100 using triplet loss and embedding size d = 64. For 1 and F models, no re-ranking was used. F is used to denote the FLOPs-per-row (lower is better). | We propose an approach to learn sparse high dimensional representations that are fast to search, by incorporating a surrogate of the number of operations directly into the loss function. | 1,224 | scitldr |
Model interpretability and systematic, targeted model adaptation present central challenges in deep learning. In the domain of intuitive physics, we study the task of visually predicting stability of block towers with the goal of understanding and influencing the model's reasoning. Our contributions are two-fold. Firstly, we introduce neural stethoscopes as a framework for quantifying the degree of importance of specific factors of influence in deep networks as well as for actively promoting and suppressing information as appropriate. In doing so, we unify concepts from multitask learning as well as training with auxiliary and adversarial losses. Secondly, we deploy the stethoscope framework to provide an in-depth analysis of a state-of-the-art deep neural network for stability prediction, specifically examining its physical reasoning. We show that the baseline model is susceptible to being misled by incorrect visual cues. This leads to a performance breakdown to the level of random guessing when training on scenarios where visual cues are inversely correlated with stability. Using stethoscopes to promote meaningful feature extraction increases performance from 51% to 90% prediction accuracy. Conversely, training on an easy dataset where visual cues are positively correlated with stability, the baseline model learns a bias leading to poor performance on a harder dataset. Using an adversarial stethoscope, the network is successfully de-biased, leading to a performance increase from 66% to 88%. Intuitive physics in the deep learning community describes physical understanding acquired by neural networks in a data-driven as opposed to a rule-based manner: With an increasing amount of training examples, we expect an algorithm to develop a better understanding of its (physical) environment, especially when the task it is trained on is inherently linked to the physical rules governing the scene. However, what type of understanding the network develops highly depends on the types of scenarios it is confronted with and the task it is trying to solve. Furthermore, it depends on the network architecture, on regularisation techniques, on the training procedure, etc. As a , in contrast to a rule-based approach, it is often hard to assess what form of physical understanding a neural network has developed. We are specifically interested in whether the network uses visual cues as shortcuts which reflect correlations in the dataset but are incommensurate with the underlying laws of physics the network was intended to learn. In this paper, we specifically focus on stability prediction of block towers, a task which has gained interest in both the deep learning BID10 BID22 BID8 and the robotics community in recent years BID11 b). Images of towers of blocks stacked on top of each other are shown to a neural network. Its task is to predict whether the tower will fall over or not ing in a binary classification problem. End-to-end learning approaches as well as simulation-based approaches achieve super-human performance on a real dataset BID10 BID22 BID8. However, with investigation of trained deep learning models limited to occlusion-based attention analyses BID10 BID8, it is not clear to what extent neural networks trained on this task take into account physical principles such as centre-of-mass or whether they follow visual cues instead. To this end, we introduce a variation of the ShapeStacks dataset presented by BID8 which facilitates the analysis of the effects of visual cues on the learning process. 
The stethoscope framework. The main network (blue), comprised of an encoder and a decoder, is trained for global stability prediction of block towers. The stethoscope (orange), a two layered perceptron, is trained to predict a nuisance parameter (local stability) where the input is Z, a learned feature from an arbitrary layer of the main network. The stethoscope loss is back-propagated with weighting factor λ to the main network. The value of λ determines whether the stethoscope operates in analytic (λ " 0), auxiliary (λ ą 0) or adversarial manner (λ ă 0).Motivated by the need for an effective tool to understand and guide the physical reasoning of the neural network and inspired by prior research in interpretability, multi-task learning and adversarial training, we present neural stethoscopes as a unified framework for the interrogation and perturbation of task-specific information at any layer. A stethoscope can be deployed in a purely analytic fashion whereby a question is posed via a stethoscope loss which is not propagated back into the main network. It can also be used to promote or suppress specific information by deploying either an auxiliary or an adversarial training mechanism. The concept is illustrated in FIG0. We demonstrate that deploying an auxiliary stethoscope can be used to promote information conducive to the main task improving overall network performance. Conversely, we show that an adversarial stethoscope can mitigate a specific bias by effectively suppressing information. Moreover, the main network does not need to be changed in order to apply a neural stethoscope. In this work, we present two contributions: An in-depth analysis of the state-of-the-art approach for intuitive stability prediction. To that end, we also introduce an extension to the existing ShapeStacks dataset which will be made publicly available. A framework for interpreting, suppressing or promoting extraction of features specific to a secondary task unifying existing approaches from interpretability, auxiliary and adversarial learning. While we frame this work in the context of intuitive physics, questions regarding model interpretability and, consequently, systematic, targeted model adaptation find applicability in all domains of deep learning. For a study of two MNIST toy problems with neural stethoscopes, please see Appendix C. This work touches on two fields within deep learning: intuitive physics, specifically stability prediction, and targeted model adaptation/interpretation. Vision-based stability prediction of block towers has become a popular task for teaching and showcasing physical understanding of algorithms. Approaches include end-to-end deep learning algorithms BID10 BID8 as well as pipelines using computer vision to create an abstract state representation which can then be used by a physics simulator BID5 BID22. BID5 achieve impressive with a pipeline approach to robotic stone stacking. Interestingly, on a publicly available real-world dataset of block tower images BID10, the state-of-the-art is shared between an end-to-end learning approach BID8 ) and a pipeline method using a physics simulator BID22. While being much easier to implement, end-to-end approaches have the downside of being significantly harder to interpret. Interpretability brings at least two advantages: Trust by understanding the model's reasoning and therefore also its potential failure cases. Identification of potentials to improve the model. Both BID10 and BID8 conduct occlusion-based attention analyses. 
BID8 find that the focus of the algorithm's attention lies within a bounding box around the stability violation in 80% of the cases. While encouraging, which can be drawn regarding the network's understanding of physical principles are limited. Moreover, BID16 shows that attention analyses can be misleading: The Grad-CAM visualisation does not change even for artificially crafted adversarial examples which maximally confuse the classifier. Neural Stethoscopes The notion of passively interrogating and actively influencing feature representations in hidden layers of neural networks connects disparate fields including interpretability, auxiliary losses and multitask learning as well as adversarial training. We note that much of the required machinery for neural stethoscopes already exists in a number of sub-domains of deep learning. Work on auxiliary objectives BID9 as well as multi-task learning (e.g. BID3 BID4) commonly utilises dedicated modules with losses targeted towards a variety of tasks in order to optimise a representation shared across tasks. Based on this notion both deep supervision BID21 and linear classifier probes BID1 reinforce the original loss signal at various levels throughout a network stack. Although their work is restricted to reinforcing the learning signal via the same loss applied to the global network BID1 in particular demonstrate that the accessibility of information at a given layer can be determined -and promotedby formulating a loss applied locally to that layer. Conversely, in order to encourage representations invariant to components of the input signal, regularisation techniques are commonly utilised (e.g. BID17). To obtain invariance with respect to known systematic changes between training and deployment data, BID6; BID23 propose methods for domain adaptation. In order to prevent a model fitting to a specific nuisance factor, BID13 minimise the Maximum Mean Discrepancy between conditional distributions with different values of a binary nuisance variable in the conditioning set. This method is limited to discrete nuisance variables and its computational cost scales exponentially with the number of states. BID24 address both issues via adversarial training, optimising an encoding to confuse an additional discriminator, which aims to determine the values of nuisance parameters. This approach assumes that the nuisance variable is known and is an input to the main model during training and deployment. BID14 follow a similar approach, applying the discriminator to the output of the main model instead of its intermediate representations. We use stethoscopes to analyse and influence the learning process on the task of stability prediction, but present it in the following as a general framework which can be applied to any set of tasks. In supervised deep learning, we typically look for a function f θ: X Ñ Y with parameters θ that maps an input x P X to its target y P Y. Often the function internally computes one or more intermediate representations z P Z of the data. In this case, we rewrite f θ as the composition of the encoder h enc θ: X Ñ Z, which maps the input to the corresponding features z P Z, and the decoder h dec θ: Z Ñ Y, which maps features to the output. In this work, we consider only classification tasks so that Y is a finite set of labels (or a simplex in the case of a probabilistic output), but our approach generalises directly also to regression tasks. The information present in the input x might support multiple different tasks. 
By introducing a supplementary task into our training framework we hope to improve the performance of our network at the primary task. However, the impact of the supplementary task on the primary task is often difficult to determine, and in some cases can even harm performance on the primary task. To this end, we propose neural stethoscopes, which allow us to interrogate the relationships between primary and supplementary tasks. In addition, the framework provides a tool to improve the network's performance at the primary task in light of both beneficial and detrimental supplementary tasks through the introduction of auxiliary and adversarial losses, respectively promoting or suppressing extraction of features related to the supplemental task by the encoder h_θ^enc. The stethoscope h_ψ^S: Z → S, parameterised by ψ, is trained on a supplemental task with targets s ∈ S and not for the main objective. We define two loss functions: L_y(θ), which measures the discrepancy between predictions f_θ and the true task y, and L_s(θ, ψ), which measures the performance on the supplemental task. The weights of the stethoscope are updated as −Δψ ∝ ∇_ψ L_s(θ, ψ) to minimise L_s(θ, ψ), and the weights of the main network as −Δθ ∝ ∇_θ L_{y,s}(θ, ψ) to minimise the energy L_{y,s}(θ, ψ) = L_y(θ) + λ L_s(θ, ψ). By choosing different values for the constant λ we obtain three very different use cases: Analytic Stethoscope (λ = 0). Here, the gradients of the stethoscope, which acts as a passive observer, are not used to alter the main model. This setup can be used to interrogate learned feature representations: if the stethoscope predictions are accurate, the features can be used to solve the task. Auxiliary Stethoscope (λ > 0). The encoder is trained with respect to the stethoscope objective, hence enforcing correlation between main network and supplemental task. This setup is related to learning with auxiliary tasks, and helpful if we expect the two tasks to be beneficially related. Adversarial Stethoscope (λ < 0). By setting λ < 0, we train the encoder to maximise the stethoscope loss (which the stethoscope still tries to minimise), thus encouraging independence between main network and supplemental tasks. This is effectively an adversarial training framework and is useful if features required to solve the stethoscope task are a detrimental nuisance factor. For the analytic stethoscope, to fairly compare the accessibility of information with respect to a certain task in different feature representations, we set two criteria: (1) The capacity of the stethoscope architecture has to be constant regardless of the dimensions of its input. (2) The stethoscope has to be able to access each neuron of the input separately. We guarantee this by connecting the input with the first layer of the stethoscope using a sparse matrix. This matrix has a constant number of non-zero entries (criterion 1) and connects every input unit as well as every output unit at least once (criterion 2). For a more detailed description, see Appendix A. In auxiliary and adversarial mode, we attach the stethoscope to the main network's last layer before the logits in a fully connected manner. This setup proved to have the highest impact on the learning process of the main network. The stethoscope itself is implemented as a two-layer perceptron with ReLU activation and trained with sigmoid or softmax cross-entropy loss on its task S. For numerical stability, the loss of the encoder in the adversarial setting is rewritten as L_{y,s}(θ, ψ) = L_y(θ) + |λ| L̃_s(θ, ψ), where L̃_s(θ, ψ) is the stethoscope loss with flipped labels.
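As an illustration of how these three regimes can be realised in a single training step, here is a sketch written in PyTorch (the paper does not prescribe a framework, and the binary losses, the two-layer stethoscope, and the optimizer handling are simplified stand-ins): the stethoscope is always updated on its own loss with the features detached, while the encoder/decoder update adds λ·L_s in the auxiliary case and |λ| times the flipped-label loss L̃_s in the adversarial case.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stethoscope(nn.Module):
    """Two-layer perceptron attached to an intermediate feature z."""
    def __init__(self, dim_in, dim_hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU(),
                                 nn.Linear(dim_hidden, 1))

    def forward(self, z):
        return self.net(z)

def stethoscope_step(encoder, decoder, stethoscope, opt_main, opt_steth, x, y, s, lam):
    """One joint update. y (main task) and s (supplemental task) are float tensors of
    shape (batch, 1) with values in {0, 1}; opt_main holds encoder+decoder parameters,
    opt_steth holds stethoscope parameters only."""
    z = encoder(x)

    # Stethoscope update: always minimises its own loss; detach() keeps its
    # gradients from reaching the encoder.
    steth_loss = F.binary_cross_entropy_with_logits(stethoscope(z.detach()), s)
    opt_steth.zero_grad()
    steth_loss.backward()
    opt_steth.step()

    # Main network update: primary loss plus the promoted / suppressed term.
    # (Stale stethoscope gradients from this backward pass are cleared by
    # opt_steth.zero_grad() at the start of the next call.)
    main_loss = F.binary_cross_entropy_with_logits(decoder(z), y)
    if lam > 0:      # auxiliary: encourage features predictive of s
        main_loss = main_loss + lam * F.binary_cross_entropy_with_logits(stethoscope(z), s)
    elif lam < 0:    # adversarial: confusion-style loss with flipped labels
        main_loss = main_loss + (-lam) * F.binary_cross_entropy_with_logits(stethoscope(z), 1.0 - s)
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
    return main_loss.item(), steth_loss.item()
```

With λ = 0 the extra term vanishes and the stethoscope acts as a purely analytic probe; for λ < 0 the flipped-label term implements the rewritten adversarial objective above.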
The objective is similar to the confusion loss formulation utilised in GANs to avoid vanishing gradients when the discriminator's performance is high BID7. Previous work has shown that neural networks are highly capable of learning physical tasks such as stability prediction. However, unlike approaches using physics simulators BID5 BID22, with pure-learning based approaches, it is hard to assess what reasoning they follow and whether they gain a sound understanding of the physical principles or whether they learn to take short-cuts following visual cues based on correlations in the training data. Occlusion-based attention analyses are a first step in this direction, but the insights gained from this are limited BID10 BID8. In this section, we follow the state-of-art approach on visual stability prediction of block towers and examine as well as influence its learning behaviour. We introduce a variation of the ShapeStacks dataset from BID8 which is particularly suited to study the dependence of network predictions on visual cues. We then examine how suppressing or promoting the extraction of certain features influences the performance of the network using neural stethoscopes. Dataset As shown in BID8, a single-stranded tower of blocks is stable if, and only if, at every interface between two blocks the centre of mass of the entire tower above is supported by the convex hull of the contact area. If a tower satisfies this criterion, i.e., it does not collapse, we call it globally stable. To be able to quantitatively assess how much the algorithm follows visual cues, we introduce a second label: We call a tower locally stable if, and only if, at every interface between two blocks, the centre of mass of the block immediately above is supported by the convex hull of the contact area. Intuitively, this measure describes, if taken on its own without any blocks above, each block would be stable. We associate binary prediction tasks y G and y L to respective global and local stability where label y " 0 indicates stability and y " 1 instability. Global and local instability are neither mutually necessary nor sufficient, but can easily be confused visually which is demonstrated by our experimental . Based on the two factors of local and global stability, we create a simulated dataset 1 with 4,000 block tower scenarios divided into four qualitative categories (cf. FIG2 . The dataset is divided into an easy subset, where local and global stability are always positively correlated, and a hard subset, where this correlation is always negative. The dataset will be made available online. Model We choose the Inception-v4 network BID18 as it yields state-of-the-art performance on stability prediction BID8 . The model is trained in a supervised setting using example tuples px, y G, y L q consisting of an image x and its global and local stability labels y G and y L . Classification losses use sigmoid cross entropy. We use the RMSProp optimiser BID19) throughout all our experiments. Local Stability as a Visual Cue Based on the four categories of scenarios described in FIG2, we conduct an initial set of experiments to gauge the influence of local stability on the network predictions. If local stability had, as it would be physically correct, no influence on the network's prediction of global stability, the performance of the network should be equal for easy and hard scenarios, regardless of the training split. However, FIG3 shows a strong influence of local stability on the prediction performance. 
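To make the two labels defined in the Dataset paragraph concrete before going through the FIG3 numbers below, the following sketch computes global and local stability for an idealized 2D tower of axis-aligned, uniform-density rectangular blocks; treating the contact area as the overlap of the x-extents and mass as proportional to block area is an illustrative simplification of the simulated ShapeStacks-style scenes.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    x: float       # x coordinate of the block's centre
    w: float       # width
    h: float       # height (mass assumed proportional to w * h)

def contact_interval(lower: Block, upper: Block):
    """Overlap of the two blocks' x-extents (the contact area in this 2D sketch)."""
    lo = max(lower.x - lower.w / 2, upper.x - upper.w / 2)
    hi = min(lower.x + lower.w / 2, upper.x + upper.w / 2)
    return lo, hi

def com_x(blocks: List[Block]) -> float:
    mass = sum(b.w * b.h for b in blocks)
    return sum(b.x * b.w * b.h for b in blocks) / mass

def stability_labels(tower: List[Block]):
    """Returns (globally_stable, locally_stable) for a bottom-to-top list of blocks."""
    globally_stable, locally_stable = True, True
    for i in range(len(tower) - 1):
        lo, hi = contact_interval(tower[i], tower[i + 1])
        if not (lo <= com_x(tower[i + 1:]) <= hi):   # whole tower above the interface
            globally_stable = False
        if not (lo <= tower[i + 1].x <= hi):          # only the block immediately above
            locally_stable = False
    return globally_stable, locally_stable

if __name__ == "__main__":
    # Locally stable but globally unstable: each block's own centre stays over its
    # support, yet the combined centre of mass above the first interface drifts off it.
    tower = [Block(0.0, 1.0, 1.0), Block(0.45, 1.0, 1.0), Block(0.9, 1.0, 1.0)]
    print(stability_labels(tower))    # (False, True)
```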
When trained on the entire, balanced data set, the error rate is three times higher for hard than for easy scenarios (6% vs. 2%). When trained on easy scenarios only, the error rate even differs by a factor of 13. Trained on hard scenarios only, the average performance across all four categories is on the level of random chance (51%), indicating that negatively correlated local and global stability imposes a much harder challenge on the network. After demonstrating the influence of local stability on the task of global stability prediction, we turn our attention to the use of neural stethoscopes to quantify and actively mitigate this influence. Figure 4: Analysis of the prediction performance throughout the Inception-v4 network trained on predicting global stability; stethoscope accuracy is plotted per layer (5a to 7d) for the tasks of local stability, origin of global instability, and global stability. We report average performances on balanced test data after 50 epochs of stethoscope training. All stethoscopes have been attached to the respective activation tensors with sparse connection matrices as described in Section 3. Task Relationship Analysis. We seek to quantify the correlation of features extracted for global stability prediction with the task of local instability detection. We train the main network on global stability prediction while attaching stethoscopes to multiple layers. The stethoscope is trained on three tasks: global stability prediction (binary), local stability (binary) and origin of global instability, particularly the interface at which this occurs (n-way one-hot). Figure 4 reveals that the network gradually builds up features which are conducive to both global and local stability prediction. However, the performance of local stability prediction peaks at the activation tensor after layer 7a whereas the performance for global stability prediction (both binary and n-way) keeps improving. This is in line with the governing physical principles of the scenarios and yields the conjecture that the information about local stability can be considered a nuisance factor whereas information about the global site of stability violation can serve as a complementary factor for the main task. We now test the hypothesis that fine-grained labels of instability locations help the main network to grasp the correct physical concepts. To that end, we consider the setup from FIG3 where the training data only consists of hard scenarios with a baseline performance of 51%. FIG4: The main network is trained on binary global stability labels while the stethoscope is trained on more fine-grained labels - origin of global instability (n-way). The network was trained on hard scenarios only but evaluated on all. The dashed lines represent baselines for λ = 0. The main network is trained on global stability while the stethoscope is trained on predicting the origin of global instability, namely the interface at which the instability occurs. FIG4 shows that auxiliary training substantially improves the performance for weighting parameters λ ∈ [0.5, 16]. However, for very small values of λ, the contribution of the additional loss term is too small, while for large values, performance deteriorates to the level of random chance as a result of the primary task being far outweighed by the auxiliary task. The results in FIG3 indicate that the network might use local stability as a visual cue to make biased assumptions about global stability.
We now investigate whether it is possible to debias the network by forcing it to pay less attention to local stability. To that end, we focus on the scenario shown in FIG3, where we only train the network on global stability labels for easy scenarios. As shown in FIG3, the performance of the network suffers significantly when tested on hard scenarios where local and global stability labels are inversely correlated. The hypothesis is that forcing the network not to focus on local stability weakens this bias. In FIG6, we use active stethoscopes (λ ‰ 0) to test this hypothesis. We train a stethoscope on local stability on labels of all categories (in a hypothetical scenario where local labels are easier to obtain than global labels) and use both the adversarial and the auxiliary setup in order to test the influence of suppressing and promoting accessibility of information relevant for local stability in the encoded representation, respectively. In FIG6, the of both adversarial and auxiliary training are compared to the baseline of λ " 0, which is equivalent to the analytic stethoscope setup. FIG6 shows that adversarial training does indeed partly remove the bias and significantly increases the performance of the main network on hard scenarios while maintaining its high accuracy on easy scenarios. The bias is removed more and more with increasing magnitude of the weighting factor λ up to a point where further increasing λ jeopardises the performance on the main task as the encoder puts more focus on confusing the stethoscope than on the main task (in our experiments this happens at λ « 10 1). Interestingly, the correlation of local stability and global stability prediction rises at this point as for pushing the stethoscope below 50% (random guessing), the main network has to extract information about local stability. Naturally, the performance of the stethoscope continuously decreases with increasing λ as the encoder puts more and more focus on confusing the stethoscope. This scenario could also be seen from the perspective of feeding additional information into the network, which could profit from more diverse training data. However, FIG6 shows that naively using an auxiliary setup to train the network on local stability worsens the bias. With increasing λ and increasing performance of the stethoscope, the network slightly improves its performance on easy scenarios while accuracy deteriorates on hard scenarios. These observations are readily explained: local and global stability are perfectly correlated in the easy scenarios, i.e., the (global) training data. The network is essentially free to chose whether to select local or global features when predicting global stability. Auxiliary training on local stability further shifts the focus to local features. When tested on hard scenarios, where local and global stability are inversely correlated, the network will therefore perform worse when it has learned to rely on local features. We study the state-of-the-art approach for stability prediction of block towers and test its physical understanding. To that end, we create a new dataset and introduce the framework of neural stethoscopes unifying multiple threads of work in machine learning related to analytic, auxiliary and adversarial probing of neural networks. The analytic application of stethoscopes allows measuring relationships between different prediction tasks. 
We show that the network trained on stability prediction also obtains a more fine-grained physical understanding of the scene (origin of instability) but at the same time is susceptible to potentially misleading visual cues (i.e., local stability). In addition to the analysis, the auxiliary and adversarial modes of the stethoscopes are used to support beneficial complementary information (origin of instability) and suppress harmful nuisance information (visual cues) without changing the network architecture of the main predictor. This yields substantial performance gains in unfavourable training conditions where data is biased or labels are partially unavailable. We encourage the use of neural stethoscopes for other application scenarios in the future as a general tool to analyse task relationships and suppress or promote extraction of specific features. This can be done by collecting additional labels or using existing multi-modal data. This is a detailed explanation of how to connect the first layer of the stethoscope with any layer of the main network as described in Section 3. The goal is to guarantee equal capacity of the stethoscope regardless of the dimensions of the layer it is connected to in order to obtain comparable . Let's call the flattened feature vector from the main network x. In the following we describe how to compute the sparse matrix M which reduces the full vector x to a smaller vector x 1 while adding biases: DISPLAYFORM0 M is a sparse matrix with a constant number of non-zero entries (n_non_zero) which are trainable. The hyperparameter n_non_zero, which determines the capacity of this additional linear layer, is chosen as an integer multiple of the length of x 1 and has to be greater or equal than the number of features in any of the layers of the main network which the stethoscope is attached to. The matrix M is generated in a way so that it fulfils the following two guarantees:• Each input is connected at least once. I.e., no column only contains zeroes.• Each output is connected an equal amount of times. I.e., all rows have the same number of non-zero entries. FIG6 shows that adversarial training increases performance given a biased training dataset. While hypothetically the network could learn to completely discard information about local stability when trying to confuse the adversarial stethoscope, it could also simply reduce its accessibility. In an additional experiment FIG7, where the main network (blue line) is fixed after adversarial training for 120 epochs (grey dashed line), the performance of the stethoscope on local stability (orange) is fully recovered to the level of the baseline for a purely analytic stethoscope (dashed orange line). We therefore argue that the information is not removed, but instead made less accessible to the stethoscope by constantly shifting the representation as explained in BID15. However, the ing performance increase for the main task indicates that this behaviour also makes information related to local stability less accessible to the decoder of the main network. Hence, although the adversarial setup does not reach final, stable equilibria (cf. BID15), it decreases the dependence of the decoder on features particular to local stability. Hence, further improvements for stabilising GAN training could additionally improve performance in adversarial stethoscope use-cases BID15 BID7 BID2. This section introduces two intuitive examples and serves to illustrate the challenge with biased training datasets. 
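The sparse reduction x' = Mx + b described above can be made concrete: M has a fixed number of trainable non-zeros, every input feature is connected at least once, and every output unit receives the same number of connections. The sketch below is one way to realize that index pattern; the random placement of the surplus connections is an assumption the text leaves open.

```python
import numpy as np

def make_sparse_connections(n_in, n_out, n_non_zero, rng=None):
    """Index pattern for M: each input column used at least once, each output row used equally."""
    rng = rng or np.random.default_rng(0)
    assert n_non_zero % n_out == 0 and n_non_zero >= n_in
    cols = np.concatenate([rng.permutation(n_in),                      # every input once
                           rng.integers(0, n_in, n_non_zero - n_in)])  # remaining slots
    rng.shuffle(cols)
    rows = np.repeat(np.arange(n_out), n_non_zero // n_out)            # equal count per output
    return rows, cols

def sparse_reduce(x, rows, cols, weights, bias):
    """Apply x' = M x + b with M given in COO form (rows, cols, weights)."""
    x_out = np.zeros(len(bias))
    np.add.at(x_out, rows, weights * x[cols])   # accumulate over duplicate row indices
    return x_out + bias
```

Because n_non_zero is fixed across layers, the stethoscope's capacity stays the same regardless of how large the activation tensor it is attached to happens to be.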
We conduct two toy experiments built on the MNIST dataset to demonstrate the issue of varying correlation between nuisance parameters and ground truth labels in training and test datasets and to demonstrate the use of an adversarial stethoscope to suppress this information and mitigate overfitting. Both experiments build on the provision of additional hints as network input. In the first experiment, the hint is a separately handled input in the form of a one-hot vector whereas in the second experiment, the hint is directly incorporated in the pixel space of the MNIST images. In both experiments, the network is trained on the loss as defined in Equation and the gradients are used to update the weights of the different parts of the network (encoder(s), decoder, stethoscope) as defined in Section 3. Hence, λ ă 0 denotes adversarial training and λ ą 0 auxiliary training. Hint as Separate One-Hot Vector In the first example, the hint labels have the same format as the true labels of the input image and we run experiments with varying correlation between hint and true label in the training set. Given high correlation, the network can learn an identity transformation between hint and output and ignore the image. Now, at test time, the hint is completely independent of the true label and hence contains no useful information. The experiment mimics challenges with biased datasets as the supplemental task s P S (see Section 3) has labels y s which are correlated to the labels y of the main objective during training but not at test time (p train py s |yq ‰ p test py s |yq). Hence, an approximation of the correlations in the training data will not generalise to data during deployment. To that end, we introduce a parameter q h denoting the quality of the hints (i.e. their correlation with the true labels) during training time where for q h " 1.0, the hints are always correct (during training), for q h " 0.5, the hints are correct for 50% of the training examples and so on. Varying the hint quality enables us to investigate biases of increasing strength. In this setup FIG8 ), both input streams are separated via independent encoders. Let h enc1 θ and h enc2 θ respectively denote image and hint encoders. The stethoscope only accesses the hint encoding in this example, which simplifies its task and enables us to demonstrate the effect of active stethoscopes without affecting the image encoding. The hint encoder multiplies the one-hot hint vector with a scalar weight. Hence, a weight close to zero would effectively hide the hint information from the main network, explicitly making the task of the stethoscope h ψ S more difficult. The image encoder consists of two convolutional layers followed by two fully connected layers with 256 units where each layer is followed by leaky ReLU activations. The decoder h dec θ, which acts as a digit classifier, is comprised of a single fully connected layer and receives the image encoding concatenated with the encoded hint vector as input. Figure 9: Influence of adversarial training on classifier performance for toy experiment variant 1 described in FIG8. Each curve represents the for different hint qualities h q. The bold lines indicate the mean of 100 runs and the shaded areas mark the student-t 1 σ confidence intervals. In (a) the network is evaluated against the ground truth. In (b), the output of the network is compared to the hint labels during test time. 
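The hint-corruption scheme assumed in this first experiment can be sketched as follows: during training, the one-hot hint equals the true digit with probability q_h and is drawn uniformly otherwise; at test time the hint is independent of the label. Note that a "wrong" hint can still coincide with the true label by chance, so the effective hint accuracy is slightly above q_h; this bookkeeping detail is an assumption.

```python
import numpy as np

def make_hints(labels, q_h, n_classes=10, correlated=True, rng=None):
    """One-hot hints: correlated with `labels` at train time, independent at test time."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    if correlated:                                    # training-time hints
        keep = rng.random(len(labels)) < q_h
        hints = np.where(keep, labels, rng.integers(0, n_classes, len(labels)))
    else:                                             # test-time hints carry no information
        hints = rng.integers(0, n_classes, len(labels))
    return np.eye(n_classes)[hints]                   # one-hot encoding
```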
Hence, a high accuracy in (b) shows strong overfitting while a high accuracy in (a) shows that the network learned the actual task of classifying digits from images. With adversarial training of the stethoscope, the encoder learns to suppress information about the hints forcing the classifier to purely rely on image data. As can be observed in Figure 9b, for perfect hints (q h " 1.0) and no adversarial training, the decoder is highly accurate in predicting the hint (instead of the actual label), suggesting that the classifier is effectively independent of the input image and purely relies on the hint. With adversarial training, however, we can indeed force the encoder to hide the hint, therefore forcing it to learn to classify digits from images. As expected, with increasing hint quality, we need to increase the weight of adversarial training so as to discourage the encoder from relying on the hint. FIG0: Influence of adversarial training on classifier performance for the second MNIST experiment. The adversarial training is less effective as both the stethoscope and the main network directly access a single encoding. High adversarial weights can strongly degrade performance on the main task as a trivial solution would be to suppress all information about the input. Each curve represents the for different hint qualities h q. The bold lines indicate the mean of 20 runs and the shaded areas mark the student-t 1 σ confidence intervals. In the second variant of the MNIST experiment, hints are provided at training time as a set of high intensity pixels in the input images as opposed to the explicit concatenation in the previous example. This better reflects the typical scenario where the goal is to disentangle the nuisance hints from the informative information related to the main task. Here, the main network is a two-layer multi-layer perceptron (MLP) with the stethoscope attached after the first hidden layer. The simpler architecture (compared to the one used in the first toy problem) was chosen in order to lower the performance on vanilla MNIST classification in order to see clearer effects. Note that in this setup, giving the same labels to the hints as for the main task in the training scenario makes the two tasks indistinguishable (in particular for perfect hint quality h q " 1.0). We therefore introduce a higher number of hint classes with 100 different hints, each of them related to a single digit in a many-to-one mapping, such that theoretical suppression of hint information is possible without degrading main performance. FIG0 shows the digit classification accuracy for the main network where the stethoscope is applied with weights ranging in both the positive and negative directions to reflect auxiliary and adversarial use of the stethoscope respectively. When using a positive stethoscope loss, which encourages h enc θ to focus more on the artificial hints over the actual digit features, the test accuracy degrades. This is expected for this toy example as the hints have zero correlation with the digit labels at test time. In the negative case, h enc θ serves as an adversary to the stethoscope effectively benefiting the main objectives by removing the misleading hint information leading to an improved test accuracy. However, as the magnitude of the adversarial strength increases the gains quickly erode as the encoder begins to suppress information relevant for the main task. | Combining auxiliary and adversarial training to interrogate and help physical understanding. | 1,225 | scitldr |
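The second MNIST variant described above, with hints written directly into pixel space, can be sketched as follows. The 2x2 block size, the grid layout used to place it, and the particular many-to-one mapping from 100 hint classes to digits are illustrative assumptions; only the 100-class, many-to-one structure is taken from the text.

```python
import numpy as np

def add_pixel_hint(images, labels, hint_quality, rng=None):
    """Paint a high-intensity 2x2 block whose position encodes one of 100 hint classes."""
    rng = rng or np.random.default_rng(0)
    images = images.copy()                      # shape (N, 28, 28), values in [0, 1]
    labels = np.asarray(labels)
    variant = rng.integers(0, 10, len(labels))  # which of the 10 hints tied to this digit
    hint_class = labels * 10 + variant          # 100 hint classes, many-to-one to digits
    wrong = rng.random(len(labels)) >= hint_quality
    hint_class[wrong] = rng.integers(0, 100, wrong.sum())   # corrupt a fraction of hints
    for i, h in enumerate(hint_class):
        r, c = divmod(int(h), 10)               # cell on a 10x10 grid of 2x2 blocks
        images[i, 2 * r: 2 * r + 2, 2 * c: 2 * c + 2] = 1.0
    return images, hint_class
```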
Flow based models such as Real NVP are an extremely powerful approach to density estimation. However, existing flow based models are restricted to transforming continuous densities over a continuous input space into similarly continuous distributions over continuous latent variables. This makes them poorly suited for modeling and representing discrete structures in data distributions, for example class membership or discrete symmetries. To address this difficulty, we present a normalizing flow architecture which relies on domain partitioning using locally invertible functions, and possesses both real and discrete valued latent variables. This Real and Discrete (RAD) approach retains the desirable normalizing flow properties of exact sampling, exact inference, and analytically computable probabilities, while at the same time allowing simultaneous modeling of both continuous and discrete structure in a data distribution. Latent generative models are one of the prevailing approaches for building expressive and tractable generative models. The generative process for a sample x can be expressed as x = g(z), where z is a noise vector and g is a parametric generator network (typically a deep neural network). This paradigm has several incarnations, including variational autoencoders, generative adversarial networks, and flow based models BID0 BID9 BID5 BID3. The training process and model architecture for many existing latent generative models, and for all published flow based models, assume a unimodal smooth distribution over latent variables z. Given the parametrization of g as a neural network, the mapping to x is a continuous function. This imposed structure makes it challenging to model data distributions with discrete structure: for instance, multi-modal distributions, distributions with holes, distributions with discrete symmetries, or distributions that lie on a union of manifolds (as may approximately be true for natural images, see BID11). Indeed, such cases require the model to learn a generator whose input Jacobian has highly varying or infinite magnitude to separate the initial noise source into different clusters. Such variations imply a challenging optimization problem due to large changes in curvature. This shortcoming can be critical, as several problems of interest are hypothesized to follow a clustering structure, i.e., the distribution is concentrated along several disjoint connected sets. A standard way to address this issue has been to use mixture models BID16 or structured priors. In order to efficiently parametrize the model, mixture models are often formulated as discrete latent variable models (BID4; van den Oord et al., 2017), some of which can be expressed as a deep mixture model BID10 BID14 BID13.
[Figure 1 caption fragment: ...model (1c, 1d). Note the dependency of K on Z in 1d. While this is not necessary, we will exploit this structure as highlighted later in the main text and in Figure 4.]
Although the resulting exponential number of mixture components with depth in deep mixture models is an advantage in terms of expressivity, it is an impediment to inference, evaluation, and training of such models, often requiring as a result the use of approximate methods like hard-EM or variational inference. In this paper we combine piecewise invertible functions with discrete auxiliary variables, selecting which invertible function applies, to describe a deep mixture model.
This framework enables a probabilistic model's latent space to have both real and discrete valued units, and to capture both continuous and discrete structure in the data distribution. It achieves this added capability while preserving the exact inference, exact sampling, exact evaluation of log-likelihood, and efficient training that make standard flow based models desirable. We aim to learn a parametrized distribution p X (x) on the continuous input domain R d by maximizing log-likelihood. The major obstacle to training an expressive probabilistic model is typically efficiently evaluating log-likelihood. If we consider a mixture model with a large number |K| of components, the likelihood takes the form DISPLAYFORM0 In general, evaluating the likelihood requires computing probabilities for all |K| components. However, following a strategy similar to , if we partition the domain DISPLAYFORM1, we can write the likelihood as DISPLAYFORM2 This transforms the problem of summation to a search problem x → f K (x). This can be seen as the inferential converse of a stratified sampling strategy BID8. The proposed approach will be a direct extension of flow based models BID6 BID5 ). Flow based models enable log-likelihood (a) An example of a trimodal distribution pX, sinusoidal distribution. The different modes are colored in red, green, and blue.(b) The ing unimodal distribution pZ, corresponding to the distribution of any of the initial modes in pX.(c) An example fZ (x) of a piecewise invertible function aiming at transforming pZ into a unimodal distribution. The red, green, and blue zones corresponds to the different modes in input space. Figure 2: Example of a trimodal distribution (2a) turned into a unimodal distribution (2b) using a piecewise invertible function (2c). Note that the initial distribution p X correspond to an unfolding of DISPLAYFORM0 evaluation by relying on the change of variable formula DISPLAYFORM1 with f Z a parametrized bijective function from R d onto R d and ∂f Z ∂x T the absolute value of the determinant of its Jacobian. As also proposed in , we relax the constraint that f Z be bijective, and instead have it be surjective onto R d and piecewise invertible. That is, we require f Z|A k (x) be an invertible function, where DISPLAYFORM2 we can define the following generative process: DISPLAYFORM3 If we use the set identification function f K associated with A k, the distribution corresponding to this stochastic inversion can be defined by a change of variable formula DISPLAYFORM4 Because of the use of both Real and Discrete stochastic variables, we call this class of model RAD.The particular parametrization we use on is depicted in Figure 2. We rely on piecewise invertible functions that allow us to define a mixture model of repeated symmetrical patterns, following a Figure 4: Illustration of the expressive power the gating distribution p K|Z provides. By capturing the structure in a sine wave in p K|Z, the function z, k → x can take on an extremely simple form, corresponding only to a linear function with respect to z. DISPLAYFORM5 method of folding the input space. Note that in this instance the function f K is implicitly defined by f Z, as the discrete latent corresponds to which invertible component of the piecewise function x falls on. So far, we have defined a mixture of |K| components with disjoint support. However, if we factorize p Z,K as p Z · p K|Z, we can apply another piecewise invertible map to Z to define p Z as another mixture model. 
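The evaluation and sampling paths implied by this construction can be made concrete with a one-dimensional example, assuming the factorized density p_X(x) = p_Z(f_Z(x)) |det ∂f_Z/∂x| p_{K|Z}(f_K(x) | f_Z(x)): here f_Z(x) = |x| folds the two half-lines onto z ≥ 0, f_K records the half-line, p_Z is a half-normal base density and p_{K|Z} a logistic gating distribution. The gating parameters are arbitrary illustrative values; the density integrates to one because the gate probabilities sum to one for every z.

```python
import numpy as np

A, B = 1.5, -0.5                                   # gating parameters (assumed values)

def gate_left(z):                                  # p_{K|Z}(K = left half-line | z)
    return 1.0 / (1.0 + np.exp(-(A * z + B)))

def log_half_normal(z):                            # log p_Z(z) for z >= 0
    return np.log(2.0) - 0.5 * z ** 2 - 0.5 * np.log(2.0 * np.pi)

def log_density(x):
    """log p_X(x) = log p_Z(f_Z(x)) + log|f_Z'(x)| + log p_{K|Z}(f_K(x) | f_Z(x))."""
    z = np.abs(x)                                  # f_Z; both pieces have |slope| = 1
    gate = np.where(x < 0, gate_left(z), 1.0 - gate_left(z))
    return log_half_normal(z) + np.log(gate)

def sample(n, rng=None):
    rng = rng or np.random.default_rng(0)
    z = np.abs(rng.standard_normal(n))             # z ~ p_Z (half-normal)
    go_left = rng.random(n) < gate_left(z)         # k ~ p_{K|Z}(. | z)
    return np.where(go_left, -z, z)                # invert the selected piece

xs = np.linspace(-8.0, 8.0, 20001)                 # sanity check: density integrates to ~1
print(np.sum(np.exp(log_density(xs))) * (xs[1] - xs[0]))
```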
Recursively applying this method in a deep mixture model (see FIG2).Another advantage of such factorization is in the gating network p K|Z, as also designated in (van den BID13 . It provides a more constrainted but less sample wasteful approach than rejection sampling BID1 by taking into account the untransformed sample z before selecting the mixture component k. This allows the model to exploit the distribution p Z in different regions A k in more complex ways than repeating it as a patternm as illustrated in Figure 4 .The function from the input to the discrete variables, f K (x), contains discontinuities. This presents the danger of introducing discontinuities into log p X (x), making optimization more difficult. However, by carefully imposing boundary conditions on the gating network, we are able to exactly counteract the effect of discontinuities in f K, and cause log p X (x) to remain continuous with respect to the parameters. This is discussed in detail in Appendix A. We conduct a brief comparison on six two-dimensional toy problems with REAL NVP to demonstrate the potential gain in expressivity RAD models can enable. Synthetic datasets of 10, 000 points each are constructed following the manifold hypothesis and/or the clustering hypothesis. We designate these problems as: grid Gaussian mixture, ring Gaussian mixture, two moons, two circles, spiral, and many moons (see FIG3). For the RAD model implementation, we use the piecewise linear activations defined in Appendix A in a coupling layer architecture BID5 for f Z where, instead of a conditional linear transformation, the conditioning variable x 1 determines the parameters of the piecewise linear activation on x 2 to obtain z 2 and k 2, with z 1 = x 1 (see FIG4). For the gating network p K|Z, the gating logit neural network s (z) take as input z = (z 1, z 2). We compare with a REAL NVP model using only affine coupling layers. p Z is a standard Gaussian distribution. As both these models can easily approximately solve these generative modeling tasks provided enough capacity, we study these model in a relatively low capacity regime, where we can showcase the potential expressivity RAD may provide. Each of these models uses six coupling layers, and each coupling layer uses a one-hidden-layer rectified network with a tanh output activation scaled by a scalar parameter as described in. For RAD, the logit network s (·) also uses a one-hidden-layer rectified neural network, but with linear output. In order to fairly compare with respect to number of parameters, we provide REAL NVP seven times more hidden units per (e) REAL NVP on spiral.(f) REAL NVP on many moons.(g) RAD on grid Gaussian mixture.(h) RAD on ring Gaussian mixture.(i) RAD on two moons.(j) RAD on two circles.(k) RAD on spiral.(l) RAD on many moons. Figure 7: Comparison of samples from trained REAL NVP (top row) (a-f) and RAD (bottow row) (g-l) models. REAL NVP fails in a low capacity setting by attributing probability mass over spaces where the data distribution has low density. Here, these spaces often connect data clusters, illustrating the challenges that come with modeling multimodal data as one continuous manifold.hidden layer than RAD, which uses 8 hidden units per hidden layer. For each level, p K|Z and f Z are trained using stochastic gradient ascent with ADAM on the log-likelihood with a batch size of 500 for 50, 000 steps. In each of these problems, RAD is consistently able to obtain higher log-likelihood than REAL NVP. 
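A simplified RAD coupling layer in the spirit of the architecture described above: x1 is copied through unchanged and also parametrizes a fold applied to x2. For readability the fold here is a plain shift-and-reflect (absolute value with unit slopes) rather than the multi-piece activation with boundary conditions from Appendix A; hidden widths and the gating network are arbitrary stand-ins.

```python
import torch
import torch.nn as nn

class RADCoupling(nn.Module):
    """x1 passes through as z1 and conditions a fold applied to x2 (2-D inputs)."""
    def __init__(self, hidden=8):
        super().__init__()
        self.center = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.gate = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 2))

    def forward(self, x):                           # x: (batch, 2)
        x1, x2 = x[:, :1], x[:, 1:]
        c = self.center(x1)                         # fold location, conditioned on x1
        z2 = (x2 - c).abs()                         # piecewise invertible, |slope| = 1
        k2 = (x2 >= c).long().squeeze(1)            # which piece x2 fell on
        z = torch.cat([x1, z2], dim=1)
        log_det = x.new_zeros(x.shape[0])           # unit slopes: zero log-Jacobian
        log_gate = torch.log_softmax(self.gate(z), dim=1)   # log p_{K|Z}(k | z)
        return z, k2, log_det, log_gate.gather(1, k2.unsqueeze(1)).squeeze(1)
```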
We plot the samples (Figure 7) of the described RAD and REAL NVP models trained on these problems. In the described low capacity regime, REAL NVP fails by attributing probability mass over spaces where the data distribution has low density. This is consistent with the mode covering behavior of maximum likelihood. However, the particular inductive bias of REAL NVP is to prefer modeling the data as one connected manifold. This in the unwanted probability mass being distributed along the space between clusters. Flow-based models often follow the principle of Gaussianization , i.e. transforming the data distribution into a Gaussian. The inversion of that process on a Gaussian distribution would therefore approximate the data distribution. We plot in FIG7 the inferred Gaussianized variables z for both models trained on the ring Gaussian mixture problem. The Gaussianization from REAL NVP leaves some area of the standard Gaussian distribution unpopulated. These unattended areas correspond to unwanted regions of probability mass in the input space. RAD suffers significantly less from this problem. An interesting feature is that RAD seems also to outperform REAL NVP on the spiral dataset. One hypothesis is that the model successfully exploits some non-linear symmetries in this problem. In RAD several points which were far apart in the input space become neighbors in z. This is not the case for REAL NVP. We take a deeper look at the Gaussianization process involved in both models. In FIG8 we plot the inference process of z from x for both models trained on the two moons problem. As a of a folding process similar to that in , several points which were far apart in the input space become neighbors in z in the case of RAD.We further explore this folding process using the visualization described in FIG1. We verify that the non-linear folding process induced by RAD plays at least two roles: bridging gaps in the distribution of probability mass, and exploiting symmetries in the data. We observe that in the case of the ring Gaussian mixture FIG1 ), RAD effectively uses foldings in order to bridge the different modes of the distribution into a single mode, primarily in the last layers of the transformation. We contrast this with REAL NVP FIG1 ) which struggles to combine these modes under the standard Gaussian distribution using bijections. In the spiral problem FIG1 ), RAD decomposes the spiral into three different lines to bridge FIG1 ) instead of unrolling the manifold fully, which REAL NVP struggles to do FIG1 ).In both cases, the points remain generally well separated by labels, even after being pushed through a RAD layer FIG1. This enables the model to maximize the conditional log-probability log(p K|Z). We introduced an approach to tractably evaluate and train deep mixture models using piecewise invertible maps as a folding mechanism. This allows exact inference, exact generation, and exact evaluation of log-likelihood, avoiding many issues in previous discrete variables models. This method can easily be combined with other flow based architectural components, allowing flow based models to better model datasets with discrete as well as continuous structure. Figure 11: RAD and REAL NVP inference processes on the ring Gaussian mixture problem. Each column correspond to a RAD or affine coupling layer. 
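For a single simplified coupling layer of the kind sketched earlier, the likelihood being maximized in these experiments can be written exactly: the fold's image is [0, ∞), so the base density on the folded coordinate is a half-normal, and the gating term accounts for which branch the point came from. Stacking several layers as in the actual experiments additionally requires the boundary handling of Appendix A to keep the density normalized, which the simplified fold omits. Batch size and step count follow the text; the learning rate is an assumed value, and RADCoupling refers to the previous snippet.

```python
import math
import torch

def rad_log_likelihood(layer, x):
    """Exact log p_X for one simplified coupling layer: z1 ~ N(0,1), z2 ~ half-normal."""
    z, _, log_det, log_gate = layer(x)
    z1, z2 = z[:, 0], z[:, 1]
    log_p1 = -0.5 * z1 ** 2 - 0.5 * math.log(2 * math.pi)
    log_p2 = math.log(2.0) - 0.5 * z2 ** 2 - 0.5 * math.log(2 * math.pi)
    return log_p1 + log_p2 + log_det + log_gate

# training sketch (data: tensor of shape (N, 2)); the learning rate is assumed
# layer = RADCoupling()
# opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
# for step in range(50_000):
#     batch = data[torch.randint(len(data), (500,))]
#     loss = -rad_log_likelihood(layer, batch).mean()
#     opt.zero_grad(); loss.backward(); opt.step()
```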
RAD effectively uses foldings in order to bridge the multiple modes of the distribution into a single mode, primarily in the last layers of the transformation, whereas REAL NVP struggles to bring together these modes under the standard Gaussian distribution using continuous bijections. A CONTINUITYThe standard approach in learning a deep probabilistic model has been stochastic gradient descent on the negative log-likelihood. Although the model enables the computation of a gradient almost everywhere, the log-likelihood is unfortunately discontinuous. Let's decompose the log-likelihood DISPLAYFORM0 There are two sources of discontinuity in this expression: f K is a function with discrete values (therefore discontinuous) and ∂f Z ∂x T is discontinuous because of the transition between the subsets A k, leading to the expression of interest DISPLAYFORM1 which takes a role similar to the log-Jacobian determinant, a pseudo log-Jacobian determinant. Let's build from now on the simple scalar case and a piecewise linear function DISPLAYFORM2 In this case, s(z) = log p K|Z k | z k≤N + C1 |K| can be seen as a vector valued function. We can attempt at parametrizing the model such that the pseudo log-Jacobian determinant becomes continuous with respect to β by expressing the boundary condition at x = β DISPLAYFORM3 ⇒s(−α 2 β) 2 + log(α 2) = s(−α 2 β) 3 + log(α 3). DISPLAYFORM4 − log(α 1), log(α 2), log(α 3) + β 2 1 + cos (zα DISPLAYFORM5 Another type of boundary condition can be found at between the non-invertible area and the invertible area z = α 2 β, as ∀z > α 2 β, p 3|Z (3 | z) = 1, therefore DISPLAYFORM6 Since the condition ∀k < 3, p K|Z k | z) → 0 when z → (α 2 β) − will lead to an infinite loss barrier at x = −β, another way to enforce this boundary condition is by adding linear pieces FIG1 ): DISPLAYFORM7 The inverse is defined as DISPLAYFORM8 In order to know the values of s at the boundaries ±α 2 β, we can use the logit function DISPLAYFORM9 Given those constraints, the model can then be reliably learned through gradient descent methods. Note that the ing tractability of the model from the fact that the discrete variables k is only interfaced during inference with the distribution p K|Z, unlike discrete variational autoencoders approaches (; BID15 where it is fed to a deep neural network. Similar to BID7, the learning of discrete variables is achieved by relying on the the continuous component of the model, and, as opposed as other approaches (; BID12 ; BID12, this gradient signal extracted is exact and closed form. We plot the remaining inference processes of RAD and REAL NVP on the remaining problems not plotted previously: grid Gaussian mixture FIG1, two circles FIG1), two moons FIG1, and many moons FIG1 ). We also compare the final of the Gaussianization processes on both models on the different toy problems in FIG1. (e) REAL NVP on spiral.(f) REAL NVP on many moons.(g) RAD on grid Gaussian mixture.(h) RAD on ring Gaussian mixture.(i) RAD on two moons.(j) RAD on two circles.(k) RAD on spiral.(l) RAD on many moons. FIG1: Comparison of the Gaussianization from the trained REAL NVP (top row) (a-f) and RAD (bottow row) (g-l). REAL NVP fails in a low capacity setting by leaving unpopulated areas where the standard Gaussian attributes probability mass. Here, these spaces as often ones separating clusters, showing the failure in modeling the data as one manifold. | Flow based models, but non-invertible, to also learn discrete variables | 1,226 | scitldr |
We investigate the loss surface of neural networks. We prove that even for one-hidden-layer networks with "slightest" nonlinearity, the empirical risks have spurious local minima in most cases. Our thus indicate that in general "no spurious local minim" is a property limited to deep linear networks, and insights obtained from linear networks may not be robust. Specifically, for ReLU(-like) networks we constructively prove that for almost all practical datasets there exist infinitely many local minima. We also present a counterexample for more general activations (sigmoid, tanh, arctan, ReLU, etc.), for which there exists a bad local minimum. Our make the least restrictive assumptions relative to existing on spurious local optima in neural networks. We complete our discussion by presenting a comprehensive characterization of global optimality for deep linear networks, which unifies other on this topic. Neural network training reduces to solving nonconvex empirical risk minimization problems, a task that is in general intractable. But success stories of deep learning suggest that local minima of the empirical risk could be close to global minima. BID5 use spherical spin-glass models from statistical physics to justify how the size of neural networks may in local minima that are close to global. However, due to the complexities introduced by nonlinearity, a rigorous understanding of optimality in deep neural networks remains elusive. Initial steps towards understanding optimality have focused on deep linear networks. This area has seen substantial recent progress. In deep linear networks there is no nonlinear activation; the output is simply a multilinear function of the input. BID1 prove that some shallow networks have no spurious local minima, and extends this to squared error deep linear networks, showing that they only have global minima and saddle points. Several other works on linear nets have also appeared (; ; ; ; a; b).The theory of nonlinear neural networks (which is the actual setting of interest), however, is still in its infancy. There have been attempts to extend the "local minima are global" property from linear to nonlinear networks, but recent suggest that this property does not usually hold . Although not unexpected, rigorously proving such turns out to be non-trivial, forcing several authors (e.g., ; BID8 ;) to make somewhat unrealistic assumptions (realizability and Gaussianity) on data. In contrast, we prove existence of spurious local minima under the least restrictive (to our knowledge) assumptions. Since seemingly subtle changes to assumptions can greatly influence the analysis as well as the applicability of known , let us first summarize what is known; this will also help provide a better intuitive perspective on our (as the technical details are somewhat involved). There is a large and rapidly expanding literature of optimization of neural networks. Some works focus on the loss surface BID1;;;;;; Safran & Shamir, 1.2 CONTRIBUTIONS AND SUMMARY OF We summarize our key contributions more precisely below. Our work encompasses for both nonlinear and linear neural networks. First, we study whether the "local minima are global" property holds for nonlinear networks. Unfortunately, our here are negative. Specifically, we prove For piecewise linear and nonnegative homogeneous activation functions (e.g., ReLU), we prove in Theorem 1 that if linear models cannot perfectly fit the data, one can construct infinitely many local minima that are not global. 
In practice, most datasets are not linearly fittable, hence this gives a constructive proof of spurious local minima for generic datasets. In contrast, several existing either provide only one counterexample , or make restrictive assumptions of realizability (; BID8 or linear separability (a). This is presented in Section 2.In Theorem 2 we tackle more general nonlinear activation functions, and provide a simple architecture (with squared loss) and dataset, for which there exists a local minimum inferior to the global minimum for a realizable dataset. Our analysis applies to a wide range of activations, including sigmoid, tanh, arctan, ELU BID6, SELU , and ReLU. Considering that realizability of data simplifies the analysis and ensures zero loss at global optima, our counterexample that is realizable and yet has a spurious local minimum is surprising, suggesting that the situation is likely worse for non-realizable data. See Section 3 for details. We complement our negative by presenting the following positive on linear networks: Assume that the hidden layers are as wide as either the input or the output, and that the empirical risk ((W j) H+1 j=1 ) equals 0 (W H+1 W H · · · W 1), where 0 is a differentiable loss function and W i is the weight matrix for layer i. Theorem 4 shows if (Ŵ j) H+1 j=1 is a critical point of, then its type of stationarity (local min/max, or saddle) is closely related to the behavior of 0 evaluated at the productŴ H+1 · · ·Ŵ 1. If we additionally assume that any critical point of 0 is a global minimum, Corollary 5 shows that the empirical risk only has global minima and saddles, and provides a simple condition to distinguish between them. To the best of our knowledge, this is the most general on deep linear networks and it subsumes several previous , e.g., (; ; ; b). This is in Section 4.Notation. For an integer a ≥ 1, [a] denotes the set of integers from 1 to a (inclusive). For a vector v, we use [v] i to denote its i-th component, while [v] [i] denotes a vector comprised of the first i components of v. Let 1 (·) (0 (·) ) be the all ones (zeros) column vector or matrix with size (·). We study below whether nonlinear neural networks provably have spurious local minima. We show in §2 and §3 that even for extremely simple nonlinear networks, one encounters spurious local minima. We first consider ReLU and ReLU-like networks. Here, we prove that as long as linear models cannot perfectly fit the data, there exists a local minimum strictly inferior to the global one. Using nonnegative homogeneity, we can scale the parameters to get infinitely many local minima. Consider a training dataset that consists of m data points. The inputs and the outputs are of dimension d x and d y, respectively. We aggregate these items, and write X ∈ R dx×m as the data matrix and Y ∈ R dy×m as the label matrix. Consider the 1-hidden-layer neural network DISPLAYFORM0. We analyze the empirical risk with squared loss DISPLAYFORM1 F. Next, define a class of piecewise linear nonnegative homogeneous functions h s+,s− (x) = max{s + x, 0} + min{s − x, 0},where s + > 0, s − ≥ 0 and s + = s −. Note that ReLU and Leaky-ReLU are members of this class. We use the shorthandX:= X (C1. DISPLAYFORM0 3) The activation function h ish s+,s−.(C1.4) The hidden layer has at least width 2: DISPLAYFORM1 Then, there is a spurious local minimum whose risk is the same as linear least squares model. Moreover, due to nonnegative homogeneity ofh s+,s−, there are infinitely many such local minima. 
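The objects appearing in Theorem 1 can be written down directly: the ReLU-like activation h_{s+,s-}, the squared-error risk of the one-hidden-layer network, and the check that no linear model on X_tilde = [X; 1^T] fits Y exactly, which is the regime in which the theorem constructs spurious local minima. The 1/2 factor in the risk and the example slope values are conventions assumed here.

```python
import numpy as np

def h(x, s_plus=1.0, s_minus=0.1):        # h_{s+,s-}; s_plus=1, s_minus=0 gives ReLU
    return np.maximum(s_plus * x, 0.0) + np.minimum(s_minus * x, 0.0)

def network_risk(W1, b1, W2, b2, X, Y, s_plus=1.0, s_minus=0.1):
    """Squared-error empirical risk of Y_hat = W2 h(W1 X + b1 1^T) + b2 1^T."""
    Y_hat = W2 @ h(W1 @ X + b1[:, None], s_plus, s_minus) + b2[:, None]
    return 0.5 * np.sum((Y_hat - Y) ** 2)

def linear_residual(X, Y):
    """Least-squares residual of the best linear model R X_tilde; a strictly positive
    value means no linear model fits Y, the assumption under which Theorem 1 applies."""
    X_tilde = np.vstack([X, np.ones((1, X.shape[1]))])
    R, *_ = np.linalg.lstsq(X_tilde.T, Y.T, rcond=None)
    return 0.5 * np.sum((R.T @ X_tilde - Y) ** 2)
```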
Noticing that most real world datasets cannot be perfectly fit with linear models, Theorem 1 shows that when we use the activationh s+,s−, the empirical risk has bad local minima for almost all datasets that one may encounter in practice. Although it is not very surprising that neural networks have spurious local minima, proving this rigorously is non-trivial. We provide a constructive and deterministic proof for this problem that holds for general datasets, which is in contrast to experimental of. We emphasize that Theorem 1 also holds even for "slightest" nonlinearities, e.g., when s + = 1 + and s − = 1 where > 0 is small. This suggests that the "local min is global" property is limited to the simplified setting of linear neural networks. Existing on squared error loss either provide one counterexample , or assume realizability and Gaussian input (; BID8 . Realizability is an assumption that the output is generated by a network with unknown parameters. In real datasets, neither input is Gaussian nor output is generated by neural networks; in contrast, our holds for most realistic situations, and hence delivers useful insight. There are several proving sufficient conditions for global optimality of nonlinear neural networks (; ;). But they rely on assumptions that the network width scales with the number of data points. For instance, applying Theorem 3.4 of to our network proves that ifX has linearly independent columns and other assumptions hold, then any critical point with W 2 = 0 is a global minimum. However, linearly independent columns already imply row(X) = R m, so even linear models RX can fit any Y; i.e., there is less merit in using a complex model to fit Y. Theorem 1 does not make any structural assumption other than d 1 ≥ 2, and addresses the case where it is impossible to fit Y with linear models, which is much more realistic. It is worth comparing our with Laurent & Brecht (2018a), who use hinge loss based classification and assume linear separability to prove "no spurious local minima" for Leaky-ReLU networks. Their does not contradict our theorem because the losses are different and we do not assume linear separability. One might wonder if our theorem holds even with d 1 ≥ m. showed that onehidden-layer neural networks with d 1 ≥ m doesn't have spurious valleys, hence there is no strict spurious local minima; however, due to nonnegative homogeneity ofh s+,s− we only have non-strict local minima. Based on BID2, one might claim that with wide enough hidden layer and random W 1 and b 1, one can fit any Y; however, this is not the case, by our assumption that linear models RX cannot fit Y. Note that for any d 1, there is a non-trivial region (measure > 0) in the parameter space where entry-wise). In this region, the output of neural networkŶ is still a linear combination of rows ofX, soŶ cannot fit Y; in fact, it can only do as well as linear models. We will see in the Step 1 of Section 2.2 that the bad local minimum that we construct "kills" d 1 − 1 neurons; however, killing many neurons is not a necessity, and it is just to simply the exposition. In fact, any local minimum in the region W 1 X + b 1 1 T m > 0 is a spurious local minimum. DISPLAYFORM2 The proof of the theorem is split into two steps. First, we prove that there exist local minima (Ŵ j,b j) 2 j=1 whose risk value is the same as the linear least squares solution, and that there are infinitely many such minima. 
Second, we will construct a tuple of parameters (W j,b j) 2 j=1 that has strictly smaller empirical risk than (Ŵ j,b j) 2 j=1.Step 1: A local minimum as good as the linear solution. The main idea here is to exploit the weights from the linear least squares solution, and to tune the parameters so that all inputs to hidden nodes become positive. Doing so makes the hidden nodes "locally linear," so that the constructed (Ŵ j,b j) 2 j=1 that produce linear least squares estimates at the output become locally optimal. Recall thatX = X T 1 m T ∈ R (dx+1)×m, and define a linear least squares loss 0 (R):= DISPLAYFORM0 T be the output of the linear least squares model, and similarlyȲ =WX.Let η:= min {−1, 2 min iȳi}, a negative constant makingȳ i − η > 0 for all i. Define parameterŝ DISPLAYFORM1 where α > 0 is any arbitrary fixed positive constant, [W] [dx] gives the first d x components ofW, and DISPLAYFORM2 (component-wise), given our choice of η. Thus, all hidden node inputs are positive. Moreover, DISPLAYFORM3 So far, we checked that (Ŵ j,b j) 2 j=1 has the same empirical risk as a linear least squares solution. It now remains to show that this point is indeed a local minimum of. To that end, we consider the perturbed parameters (Ŵ j + ∆ j,b j + δ j) 2 j=1, and check their risk is always larger. A useful point is that sinceW is a minimum of 0 (R) = DISPLAYFORM4 DISPLAYFORM5 they are aggregated perturbation terms. We used to obtain the last equality of. Thus, DISPLAYFORM6 is indeed a local minimum of. Since this is true for arbitrary α > 0, there are infinitely many such local minima. We can also construct similar local minima by permuting hidden nodes, etc. Step 2: A point strictly better than the local minimum. The proof of this step is more involved. In the previous step, we "pushed" all the input to the hidden nodes to positive side, and took advantage of "local linearity" of the hidden nodes near DISPLAYFORM7 j=1 is a spurious local minimum), we make the sign of inputs to the hidden nodes different depending on data. To this end, we sort the indices of data points in increasing order ofȳ i; i.e.,ȳ 1 ≤ȳ 2 ≤ · · · ≤ȳ m. Define the set J: DISPLAYFORM8 The remaining construction is divided into two cases: J = ∅ and J = ∅, whose main ideas are essentially the same. We present the proof for J = ∅, and defer the other case to Appendix A2 as it is rarer, and its proof, while instructive for its perturbation argument, is technically too involved. Case 1: J = ∅. Pick any j 0 ∈ J. We can observe that i≤j0 DISPLAYFORM9, so thatȳ i − β < 0 for all i ≤ j 0 andȳ i − β > 0 for all i > j 0. Then, let γ be a constant satisfying 0 < |γ| ≤ȳ j 0 +1−ȳj 0, whose value will be specified later. Since |γ| is small enough, sign(ȳ i − β) = sign(ȳ i − β + γ) = sign(ȳ i − β − γ). Now select parameters DISPLAYFORM0 Similarly, for i > j 0,ȳ i − β + γ > 0 and −ȳ i + β + γ < 0 inŷ i =ȳ i + s+−s− s++s− γ. Here, we push the outputsŷ i of the network by s+−s− s++s− γ fromȳ i, and the direction of the "push" varies depending on whether i ≤ j 0 or i > j 0.The empirical risk for this choice of parameters is DISPLAYFORM1, and choose small |γ| so that DISPLAYFORM2 is a spurious local minimum. The proof of Theorem 1 crucially exploits the piecewise linearity of the activation functions. Thus, one may wonder whether the spurious local minima seen there are an artifact of the specific nonlinearity. We show below that this is not the case. 
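Before moving on to general activations, the Step 1 construction above can be checked numerically: build the candidate point from the linear least-squares fit so that every hidden pre-activation is positive, then verify that the network reproduces the least-squares predictions and that small random perturbations never decrease the risk (in the positive region the network is affine in the input, so its risk cannot drop below the least-squares risk). The sketch follows the recipe in the text up to bookkeeping details hidden in the garbled displays, with d_y = 1 and an arbitrary synthetic dataset; it is an illustration, not the exact parameter assignment.

```python
import numpy as np

rng = np.random.default_rng(0)
dx, d1, m, alpha, s_plus, s_minus = 3, 4, 50, 1.0, 1.0, 0.1
X = rng.standard_normal((dx, m))
Y = np.sin(X.sum(axis=0, keepdims=True))            # not linearly fittable

X_tilde = np.vstack([X, np.ones((1, m))])
W_tilde, *_ = np.linalg.lstsq(X_tilde.T, Y.T, rcond=None)
W_tilde = W_tilde.T                                  # (1, dx+1): least-squares [w, b]
y_bar = W_tilde @ X_tilde
eta = min(-1.0, 2 * y_bar.min())                     # makes y_bar_i - eta > 0 for all i

W1 = np.zeros((d1, dx)); b1 = np.ones(d1)            # unused neurons get constant input 1
W1[0] = alpha * W_tilde[0, :dx]; b1[0] = alpha * (W_tilde[0, dx] - eta)
W2 = np.zeros((1, d1)); W2[0, 0] = 1.0 / (s_plus * alpha)
b2 = np.array([eta])

def h(u): return np.maximum(s_plus * u, 0) + np.minimum(s_minus * u, 0)
def risk(W1, b1, W2, b2):
    return 0.5 * np.sum((W2 @ h(W1 @ X + b1[:, None]) + b2[:, None] - Y) ** 2)

base = risk(W1, b1, W2, b2)
print(np.allclose(W2 @ h(W1 @ X + b1[:, None]) + b2[:, None], y_bar))   # True
print(all(risk(W1 + 1e-4 * rng.standard_normal(W1.shape),
               b1 + 1e-4 * rng.standard_normal(d1),
               W2 + 1e-4 * rng.standard_normal(W2.shape),
               b2 + 1e-4 * rng.standard_normal(1)) >= base - 1e-12
          for _ in range(100)))                                          # True
```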
We provide a counterexample nonlinear network and a dataset for which a wide range of nonlinear activations in a local minimum that is strictly inferior to the global minimum with exactly zero empirical risk. Examples of such activation functions include popular activation functions such as sigmoid, tanh, arctan, ELU, SELU, and ReLU.We consider again the squared error empirical risk of a one-hidden-layer nonlinear neural network: DISPLAYFORM0 F, where we fix d x = d 1 = 2 and d y = 1. Also, let h (k) (x) be the k-th derivative of h: R → R, whenever it exists at x. For short, let h and h denote the first and second derivatives. For this network and dataset the following hold: DISPLAYFORM1 at which equals 0. 2. If there exist real numbers v 1, v 2, u 1, u 2 ∈ R such that the following conditions hold: DISPLAYFORM2 such that the output of the network is the same as the linear least squares model, the risk DISPLAYFORM3 j=1 is a local minimum of.Theorem 2 shows that for this architecture and dataset, activations that satisfy (C2.1)-(C2.7) introduce at least one spurious local minimum. Notice that the empirical risk is zero at the global minimum. This means that the data X and Y can actually be "generated" by the network, which satisfies the realizability assumption that others use (; BID8). Notice that our counterexample is "easy to fit," and yet, there exists a local minimum that is not global. This leads us to conjecture that with harder datasets, the problems with spurious local minima could be worse. The proof of Theorem 2 can be found in Appendix A3.Discussion. Note that the conditions (C2.1)-(C2.7) only require existence of certain real numbers rather than some global properties of activation h, hence are not as restrictive as they look. Conditions (C2.1)-(C2.2) come from a choice of tuple (W j,b j) 2 j=1 that perfectly fits the data. Condition (C2.3) is necessary for constructing (Ŵ j,b j) 2 j=1 with the same output as the linear least squares model, and Conditions (C2.4)-(C2.7) are needed for showing local minimality of (Ŵ j,b j) 2 j=1 via Taylor expansions. The class of functions that satisfy conditions (C2.1)-(C2.7) is quite large, and includes the nonlinear activation functions used in practice. The next corollary highlights this observation (for a proof with explicit choices of the involved real numbers, please see Appendix A5). Corollary 3. For the counterexample in Theorem 2, the set of activation functions satisfying conditions (C2.1)-(C2.7) include sigmoid, tanh, arctan, quadratic, ELU, and SELU.Admittedly, Theorem 2 and Corollary 3 give one counterexample instead of stating a claim about generic datasets. Nevertheless, this example shows that for many practical nonlinear activations, the desirable "local minimum is global" property cannot hold even for realizable datasets, suggesting that the situation could be worse for non-realizable ones. Remark: "ReLU-like" activation functions. Recall the piecewise linear nonnegative homogeneous activation functionh s+,s−. They do not satisfy condition (C2.7), so Theorem 2 cannot be directly applied. Also, if s − = 0 (i.e., ReLU), conditions (C2.1)-(C2.2) are also violated. However, the statements of Theorem 2 hold even forh s+,s−, which is shown in Appendix A6. Recalling again s + = 1 + and s − = 1, this means that even with the "slightest" nonlinearity in activation function, the network has a global minimum with risk zero while there exists a bad local minimum that performs just as linear least squares models. 
In other words, "local minima are global" property is rather brittle and can only hold for linear neural networks. Another thing to note is that in Appendix A6, the bias parameters are all zero, for both (W j,b j) In this section we present our on deep linear neural networks. Assuming that the hidden layers are at least as wide as either the input or output, we show that critical points of the loss with a multilinear parameterization inherit the type of critical points of the loss with a linear parameterization. As a corollary, we show that for differentiable losses whose critical points are globally optimal, deep linear networks have only global minima or saddle points. Furthermore, we provide an efficiently checkable condition for global minimality. Suppose the network has H hidden layers having widths d 1,..., d H. To ease notation, we set DISPLAYFORM0 The weights between adjacent layers are kept in matrices W j ∈ R dj ×dj−1 DISPLAYFORM1, and the outputŶ of the network is given by the product of weight matrices with the data matrix: DISPLAYFORM2 j=1 be the tuple of all weight matrices, and W i:j denote the product W i W i−1 · · · W j+1 W j for i ≥ j, and the identity for i = j − 1. We consider the empirical risk ((W j) H+1 j=1 ), which, for linear networks assumes the form DISPLAYFORM3 where 0 is a suitable differentiable loss. For example, when 0 (R) = DISPLAYFORM4 Remark: bias terms. We omit the bias terms b 1,..., b H+1 here. This choice is for simplicity; models with bias can be handled by the usual trick of augmenting data and weight matrices. We are now ready to state our first main theorem, whose proof is deferred to Appendix A7. Theorem 4. Suppose that for all j, d j ≥ min{d x, d y}, and that the loss is given by, where 0 is differentiable on R dy×dx. For any critical point (Ŵ j) H+1 j=1 of the loss, the following claims hold: DISPLAYFORM0 j=1 is a saddle of. DISPLAYFORM1 j=1 is a local min (max) of ifŴ H+1:1 is a local min (max) of 0; moreover, DISPLAYFORM2 j=1 is a global min (max) of if and only ifŴ H+1:1 is a global min (max) of 0.3. If there exists j * ∈ [H + 1] such thatŴ H+1:j * +1 has full row rank andŴ j * −1:1 has full column rank, then ∇ 0 (Ŵ H+1:1) = 0, so 2(a) and 2(b) hold. Also, DISPLAYFORM3 j=1 is a local min (max) of.Let us paraphrase Theorem 4 in words. In particular, it states that if the hidden layers are "wide enough" so that the product W H+1:1 can attain full rank and if the loss assumes the form for a differentiable loss 0, then the type (optimal or saddle point) of a critical point (Ŵ j) H+1 j=1 of is governed by the behavior of 0 at the productŴ H+1:1.Note that for any critical point (Ŵ j) H+1 j=1 of the loss, either ∇ 0 (Ŵ H+1:1) = 0 or ∇ 0 (Ŵ H+1:1) = 0. Parts 1 and 2 handle these two cases. Also observe that the condition in Part 3 implies ∇ 0 = 0, so Part 3 is a refinement of Part 2. A notable fact is that a sufficient condition for Part 3 isŴ H+1:1 having full rank. For example, if d x ≥ d y, full-rankŴ H+1:1 implies rank(Ŵ H+1:2) = d y, whereby the condition in Part 3 holds with j * = 1.IfŴ H+1:1 is not critical for 0, then (Ŵ j) H+1 j=1 must be a saddle point of. IfŴ H+1:1 is a local min/max of 0, (Ŵ j) H+1 j=1 is also a local min/max of. Notice, however, that Part 2(a) does not address the case of saddle points; whenŴ H+1:1 is a saddle point of 0, the tuple (Ŵ j) H+1 j=1 can behave arbitrarily. 
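For the squared-error case, the test suggested by Theorem 4 (and sharpened by Corollary 5 below) is easy to write down: at a critical point of the deep linear network, form the product W_{H+1:1}, evaluate the gradient of ℓ0 there, and read off the type. The sketch assumes the supplied tuple really is a critical point of ℓ and that the hidden widths satisfy the theorem's condition; the convexity of the squared loss is what turns "∇ℓ0 = 0" into "global minimum".

```python
import numpy as np

def classify_critical_point(weights, X, Y, tol=1e-8):
    """weights = [W1, ..., W_{H+1}]; assumes a critical point of the deep-linear
    squared-error loss with d_j >= min(d_x, d_y) for every hidden layer."""
    prod = weights[0]
    for W in weights[1:]:
        prod = W @ prod                              # W_{H+1:1} = W_{H+1} ... W_1
    grad_l0 = (prod @ X - Y) @ X.T                   # grad of l0(R) = 0.5 ||R X - Y||_F^2
    if np.linalg.norm(grad_l0) > tol:
        return "saddle point of the deep-linear loss"
    return "global minimum (grad of l0 vanishes and the squared loss is convex)"
```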
However, with the condition in Part 3, statements 2(a) and 3(a) hold at the same time, so thatŴ H+1:1 is a local min/max of 0 if and only if (Ŵ j) H+1 j=1 is a local min/max of. Observe that the same "if and only if" statement holds for saddle points due to their definition; in summary, the types (min/max/saddle) of the critical points (Ŵ j) H+1 j=1 andŴ H+1:1 match exactly. Although Theorem 4 itself is of interest, the following corollary highlights its key implication for deep linear networks. Corollary 5. In addition to the assumptions in Theorem 4, assume that any critical point of 0 is a global min (max). For any critical point (Ŵ j) Corollary 5 shows that for any differentiable loss function 0 whose critical points are global minima, the loss has only global minima and saddle points, therefore satisfying the "local minima are global" property. In other words, for such an 0, the multilinear re-parametrization introduced by deep linear networks does not introduce any spurious local minima/maxima; it only introduces saddle points. Importantly, Corollary 5 also provides a checkable condition that distinguishes global minima from saddle points. Since is nonconvex, it is remarkable that such a simple necessary and sufficient condition for global optimality is available. DISPLAYFORM4 Our generalizes previous works on linear networks such as Kawaguchi FORMULA2;; , because it provides conditions for global optimality for a broader range of loss functions without assumptions on datasets. Laurent & Brecht (2018b) proved that if DISPLAYFORM5 j=1 is a local min of, thenŴ H+1:1 is a critical point of 0. First, observe that this is implied by Theorem 4.1. So our , which was proved in parallel and independently, is strictly more general. With additional assumption that critical points of 0 are global minima, Laurent & Brecht (2018b) showed that "local min is global" property holds for linear neural networks; our Corollay 5 gives a simple and efficient test condition as well as proving there are only global minima and saddles, which is clearly stronger. We investigated the loss surface of deep linear and nonlinear neural networks. We proved two theorems showing existence of spurious local minima on nonlinear networks, which apply to almost all datasets (Theorem 1) and a wide class of activations (Theorem 2). We concluded by Theorem 4, showing a general studying the behavior of critical points in multilinearly parametrized functions, which unifies other existing on linear neural networks. Given that spurious local minima are common in neural networks, a valuable future research direction will be investigating how far local minima are from global minima in general, and how the size of the network affects this gap. Another thing to note is that even though we showed the existence of spurious local minima in the whole parameter space, things can be different in restricted sets of parameter space (e.g., by adding regularizers). Understanding the loss surface in such sets would be valuable. Additionally, one can try to show algorithmic/trajectory of (stochastic) gradient descent. We hope that our paper will be a stepping stone to such future research. A2 PROOF OF THEOREM 1, STEP 2, CASE 2 Case 2. J = ∅. We start with a lemma discussing what J = ∅ implies. Lemma A.1. If J = ∅, the following statements hold:1. There are someȳ j's that are duplicate; i.e. for some i = j,ȳ i =ȳ j.2. Ifȳ j is non-duplicate, meaning thatȳ j−1 <ȳ j <ȳ j+1,ȳ j = y j holds.3. Ifȳ j is duplicate, i:ȳi=ȳj (ȳ i − y i) = 0 holds.4. 
There exists at least one duplicateȳ j such that, for thatȳ j, there exist at least two different i's that satisfyȳ i =ȳ j andȳ i = y i.Proof We prove this by showing if any of these statements are not true, then we have J = ∅ or a contradiction.1. If all theȳ j's are distinct and J = ∅, by definition of J,ȳ j = y j for all j. This violates our assumption that linear models cannot perfectly fit Y.2. If we haveȳ j = y j for a non-duplicateȳ j, at least one of the following statements must hold: i≤j−1 (ȳ i − y i) = 0 or i≤j (ȳ i − y i) = 0, meaning that j − 1 ∈ J or j ∈ J.3. Supposeȳ j is duplicate and i:ȳi=ȳj (ȳ i − y i) = 0. Let k = min{i |ȳ i =ȳ j} and l = max{i |ȳ i =ȳ j}. Then at least one of the following statements must hold: DISPLAYFORM0 4. Since i:ȳi=ȳj (ȳ i − y i) = 0 holds for any duplicateȳ j, ifȳ i = y i holds for one i then there must be at least two of them that satisfiesȳ i = y i. If this doesn't hold for all duplicatē y i, with Part 2 this means thatȳ j = y j holds for all j. This violates our assumption that linear models cannot perfectly fit Y.From Lemma A.1.4, we saw that there is a duplicate value ofȳ j such that some of the data points i satisfyȳ i =ȳ j andȳ i = y i. The proof strategy in this case is essentially the same, but the difference is that we choose one of such duplicateȳ j, and then choose a vector v ∈ R dx to "perturb" the linear least squares solution [W] [dx] in order to break the tie between i's that satisfiesȳ i =ȳ j andȳ i = y i.We start by defining the minimum among such duplicate valuesȳ * ofȳ j's, and a set of indices j that satisfiesȳ j =ȳ *.ȳ * = min{ȳ j | ∃i = j such thatȳ i =ȳ j andȳ i = y i}, DISPLAYFORM1 Then, we define a subset of J *: DISPLAYFORM2 By Lemma A.1.4, cardinality of J * = is at least two. Then, we define a special index in J * =: DISPLAYFORM3 Index j 1 is the index of the "longest" x j among elements in J * =. Using the definition of j 1, we can partition J * into two sets: DISPLAYFORM4 2 2 }. For the indices in J *, we can always switch the indices without loss of generality. So we can assume that j ≤ j 1 = max J * ≥ for all j ∈ J * ≥ and j > j 1 for all j ∈ J * <.We now define a vector that will be used as the "perturbation" to [W] [dx]. Define a vector v ∈ R dx, which is a scaled version of x j1: DISPLAYFORM5 where the constants g and M are defined to be DISPLAYFORM6 The constant M is the largest x i 2 among all the indices, and g is one fourth times the minimum gap between all distinct values ofȳ i. [dx] by a vector −αv T. where α ∈ will be specified later. Observe that DISPLAYFORM0 Recall that j ≤ j 1 = max J * ≥ for all j ∈ J * ≥ and j > j 1 for all j ∈ J * <. We are now ready to present the following lemma: Lemma A.2. Define DISPLAYFORM1 Proof First observe that, for any x i, |αv DISPLAYFORM2 By definition of g, we have 2g <ȳ j −ȳ i for anyȳ i <ȳ j. Using this, we can see that DISPLAYFORM3 In words, ifȳ i andȳ j are distinct and there is an orderȳ i <ȳ j, perturbation of [W] [dx] by −αv T does not change the order. Also, since v is only a scaled version of x j1, from the definitions of J * ≥ and J * DISPLAYFORM4 It is left to prove the statement of the lemma using case analysis, using the inequalities (A.1), (A.2), and (A.3). For all i's such thatȳ i <ȳ * =ȳ j1, DISPLAYFORM5 Similarly, for all i such thatȳ i >ȳ * =ȳ j2, DISPLAYFORM6 For j ∈ J * ≥ (j ≤ j 1), we knowȳ j =ȳ *, sō DISPLAYFORM7 Also, for j ∈ J * < (j > j 1), DISPLAYFORM8 This finishes the case analysis and proves the first statements of the lemma. 
One last thing to prove is that i>j1 DISPLAYFORM9 Recall from Lemma A.1.2 that for non-duplicateȳ j, we haveȳ j = y j. Also by Lemma A.1.3 ifȳ j is duplicate, DISPLAYFORM10 Recall the definition of J * = = {j ∈ J * |ȳ j = y j}. For j ∈ J * \J * =,ȳ j = y j. So, DISPLAYFORM11 Recall the definition of j 1 = argmax j∈J * = x j 2. For any other j ∈ J * = \{j 1}, DISPLAYFORM12, where the first ≥ sign is due to definition of j 1, and the second is from Cauchy-Schwarz inequality. Since x j1 and x j are distinct by assumption, they must differ in either length or direction, or both. So, we can check that at least one of "≥" must be strict inequality, so x j1 2 2 > x j, x j1 for all j ∈ J * = \{j 1}. Thus, DISPLAYFORM13 Also, by Lemma A.1.3, DISPLAYFORM14 Wrapping up all the equalities, we can conclude that DISPLAYFORM15 finishing the proof of the last statement. It is time to present the parameters (W j,b j) 2 j=1, whose empirical risk is strictly smaller than the local minimum (Ŵ j,b j) 2 j=1 with a sufficiently small choice of α ∈. Now, let γ be a constant such that DISPLAYFORM16 Its absolute value is proportional to α ∈, which is a undetermined number that will be specified at the end of the proof. Since |γ| is small enough, we can check that sign(ȳ i − αv DISPLAYFORM17 Then, assign parameter values DISPLAYFORM18 With these parameter values,W DISPLAYFORM19 As we saw in Lemma A.2, for i ≤ j 1,ȳ i − αv DISPLAYFORM20 Similarly, for i > j 1,ȳ i − αv DISPLAYFORM21 Now, the squared error loss of this point is DISPLAYFORM22 Recall that DISPLAYFORM23 As seen in the definition of γ (A.4), the magnitude of γ is proportional to α. Substituting (A.4), we can express the loss as DISPLAYFORM24 Recall that v T (x j1 − x j2) > 0 from (A.3). Then, for sufficiently small α ∈, DISPLAYFORM25 DISPLAYFORM26 With these values, we can check thatŶ = , hence perfectly fitting Y, thus the loss DISPLAYFORM27 Given conditions (C2.3)-(C2.7) on v 1, v 2, u 1, u 2 ∈ R, we prove below that there exists a local minimum (Ŵ j,b j) 2 j=1 for which the output of the network is the same as linear least squares model, and its empirical risk is ((Ŵ j,b j) 2 j=1 ) =. Now assign parameter valuesŴ DISPLAYFORM0 With these values we can check thatŶ =. It remains to show that this is indeed a local minimum of. To show this, we apply perturbations to the parameters to see if the risk after perturbation is greater than or equal to ((Ŵ j,b j) 2 j=1 ). Let the perturbed parameters bě DISPLAYFORM1 where δ 11, δ 12, δ 21, δ 22, β 1, β 2, 1, 2, and γ are small real numbers. The next lemma rearranges the terms in ((W j,b j) 2 j=1 ) into a form that helps us prove local minimality of (Ŵ j,b j) 2 j=1. Appendix A4 gives the proof of Lemma A.3, which includes as a byproduct some equalities on polynomials that may be of wider interest. Lemma A.3. Assume there exist real numbers v 1, v 2, u 1, u 2 such that conditions (C2.3)-(C2.5) hold. Then, for perturbed parameters DISPLAYFORM2 where DISPLAYFORM3 + o, for i = 1, 2, and DISPLAYFORM4 + o, and o contains terms that diminish to zero as perturbations vanish. To make the the sum of the last three terms of (A.6) nonnegative, we need to satisfy α 1 ≥ 0 and α 2 3 − 4α 1 α 2 ≤ 0; these inequalities are satisfied for small enough perturbations because of conditions (C2.6)-(C2.7). Thus, we conclude that DISPLAYFORM5 j=1 is a local minimum. The goal of this lemma is to prove that DISPLAYFORM0 where o contains terms that diminish to zero as perturbations decrease. 
Using the perturbed parameters, DISPLAYFORM1 so the empirical risk can be expressed as DISPLAYFORM2 (A.8) So, the empirical risk (A.8) consists of three terms, one for each training example. By expanding the activation function h using Taylor series expansion and doing algebraic manipulations, we will derive the equation (A.7) from (A.8).Using the Taylor series expansion, we can express h(v 1 + δ 11 + β 1) as DISPLAYFORM3 Using a similar expansion for h(v 2 + δ 21 + β 2), the first term of (A.8) can be written as DISPLAYFORM4 where we used u 1 h(v 1)+u 2 h(v 2) = 1 3. To simplify notation, let us introduce the following function: DISPLAYFORM5 With this new notation t(δ 1, δ 2), after doing similar expansions to the other terms of (A.8), we get DISPLAYFORM6 Before we show the lower bounds, we first present the following lemmas that will prove useful shortly. These are simple yet interesting lemmas that might be of independent interest. Lemma A.4. For n ≥ 2, DISPLAYFORM7 where p n is a polynomial in a and b. All terms in p n have degree exactly n − 2. When n = 2, DISPLAYFORM8 Proof The exact formula for p n (a, b) is as the following: DISPLAYFORM9 Using this, we can check the lemma is correct just by expanding both sides of the equation. The rest of the proof is straightforward but involves some complicated algebra. So, we omit the details for simplicity. Lemma A.5. For n 1, n 2 ≥ 1, DISPLAYFORM10 where q n1,n2 and r n1,n2 are polynomials in a, b, c and d. All terms in q n1,n2 and r n1,n2 have degree exactly DISPLAYFORM11 Proof The exact formulas for q n1,n2 (a, b, d), q n2,n1 (c, d, b), and r n1,n2 (a, b, c, d) are as the following: DISPLAYFORM12 Similarly, we can check the lemma is correct just by expanding both sides of the equation. The remaining part of the proof is straightforward, so we will omit the details. Using Lemmas A.4 and A.5, we will expand and simplify the "cross terms" part and "squared terms" part of (A.9). For the "cross terms" in (A.9), let us split t(δ 1, δ 2) into two functions t 1 and t 2: DISPLAYFORM13 It is easy to check that DISPLAYFORM14 Also, using Lemma A.4, we can see that DISPLAYFORM15 Consider the summation DISPLAYFORM16 We assumed that there exists a constant c > 0 such that |h (n) (v 1)| ≤ c n n!. From this, for small enough perturbations δ 11, δ 12, and β 1, we can see that the summation converges, and the summands converge to zero as n increases. Because all the terms in p n (n ≥ 3) are of degree at least one, we can thus write DISPLAYFORM17 So, for small enough δ 11, δ 12, and β 1, the term DISPLAYFORM18 dominates the summation. Similarly, as long as δ 21, δ 22, and β 2 are small enough, the summation DISPLAYFORM19. In , for small enough perturbations, DISPLAYFORM20 Now, it is time to take care of the "squared terms." We will express the terms as This time, we split t(δ 1, δ 2) in another way, this time into three parts: DISPLAYFORM21 so that t(δ 1, δ 2) = t 3 + t 4 (δ 1) + t 5 (δ 2). DISPLAYFORM22 as seen in (A.10). Next, we have (A.14) when perturbations are small enough. We again used Lemma A.4 in the second equality sign, and the facts that p n1+n2 (·) = o whenever n 1 + n 2 > 2 and that p 2 (·) = 1 2. In a similar way, DISPLAYFORM23 DISPLAYFORM24 Lastly, A.16) where the second equality sign used Lemma A.5 and the third equality sign used the facts that q n1,n2 (·) = o and r n1,n2 (·) = o whenever n 1 + n 2 > 2, and that q 1,1 (·) = 0 and r 1,1 (·) = 1 2. 
If we substitute (A.13), (A.14), (A.15), and (A.16) into (A.12), DISPLAYFORM25 DISPLAYFORM26 We are almost done. If we substitute (A.10), (A.11), and (A.17) into (A.9), we can get DISPLAYFORM27 which is the equation (A.7) that we were originally aiming to show. For the proof of this corollary, we present the values of real numbers that satisfy assumptions (C2.1)-(C2.7), for each activation function listed in the corollary: sigmoid, tanh, arctan, exponential linear units , scaled exponential linear units .To remind the readers what the assumptions were, we list the assumptions again. For (C2.1)-(C2.2), there exist real numbers v 1, v 2, v 3, v 4 ∈ R such that DISPLAYFORM0 For (C2.3)-(C2.7), there exist real numbers v 1, v 2, u 1, u 2 ∈ R such that the following assumptions hold: DISPLAYFORM1 ).For each function, we now present the appropriate real numbers that satisfy the assumptions. When h is sigmoid, DISPLAYFORM0 Assumptions (C2.1)-(C2.2) are satisfied by DISPLAYFORM1 and assumptions (C2.3)-(C2.7) are satisfied by DISPLAYFORM2 Among them, (C2.4)-(C2.5) follow because sigmoid function is an real analytic function Krantz & Parks (2002 When h is quadratic, assumptions (C2.1)-(C2.2) are satisfied by DISPLAYFORM0 and assumptions (C2.3)-(C2.7) are satisfied by DISPLAYFORM1 In the case of s − > 0, assumptions (C2.1)-(C2.2) are satisfied by DISPLAYFORM2 The rest of the proof can be done in exactly the same way as the proof of Theorem 2.1, provided in Appendix A3.For s − = 0, which corresponds to the case of ReLU, define parameters DISPLAYFORM3 We can check thath DISPLAYFORM4 Assumptions (C2.3)-(C2.6) are satisfied by DISPLAYFORM0 Assign parameter valueŝ DISPLAYFORM1 It is easy to compute that the output of the neural network isŶ =. Now, it remains to show that this is indeed a local minimum of. To show this, we apply perturbations to the parameters to see if the risk after perturbation is greater than or equal to ((Ŵ j,b j) 2 j=1 ). Let the perturbed parameters bě DISPLAYFORM2 where δ 11, δ 12, δ 21, δ 22, β 1, β 2, 1, 2, and γ are small enough real numbers. Using the perturbed parameters, DISPLAYFORM3 so the empirical risk can be expressed as DISPLAYFORM4 Published as a conference paper at ICLR 2019To simplify notation, let us introduce the following function: DISPLAYFORM5 It is easy to check that DISPLAYFORM6 Before we start, note the following partial derivatives, which can be computed using straightforward matrix calculus: DISPLAYFORM0 For Part 1, we must show that if ∇ 0 (Ŵ H+1:1) = 0 then (Ŵ j) j=1 is a saddle point of. Thus, we show that (Ŵ j) H+1 j=1 is neither a local minimum nor a local maximum. More precisely, for each j, let B (W j) be an -Frobenius-norm-ball centered at W j, and H+1 j=1 B (W j) their Cartesian product. We wish to show that for every > 0, there exist tuples (P j) DISPLAYFORM0 To prove (A.18), we exploit ((Ŵ j) H+1 j=1 ) = 0 (Ŵ H+1:1), and the assumption ∇ 0 (Ŵ H+1:1) = 0. The key idea is to perturb the tuple (Ŵ j) H+1 j=1 so that the directional derivative of 0 along P H+1:1 − W H+1:1 is positive. Since 0 is differentiable, if P H+1:1 −Ŵ H+1:1 is small, then DISPLAYFORM1 Similarly, we can show ((Q j) H+1 j=1 ) < ((Ŵ j) H+1 j=1 ). The key challenge lies in constructing these perturbations; we outline our approach below; this construction may be of independent interest too. For this section, we assume that d x ≥ d y for simplicity; the case d y ≥ d x is treated in Appendix A7.2.Since ∇ 0 (Ŵ H+1:1) = 0, col(∇ 0 (Ŵ H+1:1)) ⊥ must be a strict subspace of R dy. 
Consider ∂ /∂W 1 at a critical point to see that (Ŵ H+1:2) T ∇ 0 (Ŵ H+1:1) = 0, so col(Ŵ H+1:2) ⊆ col(∇ 0 (Ŵ H+1:1)) ⊥ R dy. This strict inclusion implies rank(Ŵ H+1:2) < d y ≤ d 1, so that null(Ŵ H+1:2) is not a trivial subspace. Moreover, null(Ŵ H+1:2) ⊇ null(Ŵ H:2) ⊇ · · · ⊇ null(Ŵ 2). We can split the proof into two cases: null(Ŵ H+1:2) = null(Ŵ H:2) and null(Ŵ H+1:2) = null(Ŵ H:2). Recall that W 1 ∈ R d1×dx. Given R ∈ R dy×dx, we can fill the first d y rows of W 1 with R and let any other entries be zero. For all the other matrices W 2,..., W H+1, we put ones to the diagonal entries while putting zeros to all the other entries. We can check that, by this construction, R = W H+1:1 for this given R. DISPLAYFORM2 dy×dx, we can fill the first d x columns of W H+1 with R and let any other entries be zero. For all the other matrices W 1,..., W H, we put ones to the diagonal entries while putting zeros to all the other entries. By this construction, R = W H+1:1 for given R.Once this fact is given, by ((W j) ). To show thatŴ H+1:1 is a local min of 0, we have to show there exists a neighborhood ofŴ H+1:1 such that, any point R in that neighborhood satisfies 0 (R) ≥ 0 (Ŵ H+1:1). To prove this, we state the following lemma: Lemma A.6. Suppose A:=Ŵ H+1:j * +1 has full row rank and B:=Ŵ j * −1:1 has full column rank. Then, any R satisfying R −Ŵ H+1:1 F ≤ σ min (A)σ min (B) can be decomposed into R = V H+1:1, where DISPLAYFORM3 and V j =Ŵ j for j = j *. Also, V j −Ŵ j F ≤ for all j. Proof Since A:=Ŵ H+1:j * +1 has full row rank and B:=Ŵ j * −1:1 has full column rank, σ min (A) > 0, σ min (B) > 0, and AA T and B T B are invertible. Consider any R satisfying R −Ŵ H+1:1 F ≤ σ min (A)σ min (B). Given the definitions of V j's in the statement of the lemma, we can check the identity that R = V H+1:1 by Therefore, DISPLAYFORM4 · σ min (A)σ min (B) =.The lemma shows that for any R = V H+1:1 satisfying R −Ŵ H+1:1 F ≤ σ min (A)σ min (B), we have 0 (R) = 0 (V H+1:1) = ((V j) H+1 j=1 ) ≥ ((Ŵ j) H+1 j=1 ) = 0 (Ŵ H+1:1). We can prove the local maximum part by a similar argument. | We constructively prove that even the slightest nonlinear activation functions introduce spurious local minima, for general datasets and activation functions. | 1,227 | scitldr |
The tasks that an agent will need to solve often aren’t known during training. However, if the agent knows which properties of the environment we consider im- portant, then after learning how its actions affect those properties the agent may be able to use this knowledge to solve complex tasks without training specifi- cally for them. Towards this end, we consider a setup in which an environment is augmented with a set of user defined attributes that parameterize the features of interest. We propose a model that learns a policy for transitioning between “nearby” sets of attributes, and maintains a graph of possible transitions. Given a task at test time that can be expressed in terms of a target set of attributes, and a current state, our model infers the attributes of the current state and searches over paths through attribute space to get a high level plan, and then uses its low level policy to execute the plan. We show in grid-world games and 3D block stacking that our model is able to generalize to longer, more complex tasks at test time even when it only sees short, simple tasks at train time. Deep reinforcement learning has demonstrated impressive successes in building agents that can solve difficult tasks, e.g. BID20;. However, these successes have mostly been confined to situations where it is possible to train a large number of times on a single known task or distribution of tasks. On the other hand, in some situations, the tasks of interest are not known at training time or are too complex to be completed by uninformed exploration on a sparse set of rewards. In these situations, it may be that the cost of the supervision required to identify the important features of the environment, or to describe the space of possible tasks within it, is not so onerous. Recently several papers have taken this approach, for example Reed & de; BID2; BID22; BID7.If we expect an agent to be able to solve many different kinds of tasks, the representation of the task space is particularly important. In this paper, we impose structure on the task space through the use of attribute sets, a high-level abstraction of the environment state. The form of these are chosen by hand to capture task-relevant concepts, allowing both end goals as well as intermediate sub-tasks to be succinctly represented. As in Reed & de; BID2; BID22, we thus trade extra supervision for generalization. The attributes yield a natural space in which to plan: instead of searching over possible sequences of actions, we instead search over attribute sets. Once the agent learns how its actions affect the environment in terms of its relevant attributes, novel tasks can be solved compositionally by executing a plan consisting of a sequence of transitions between abstract states defined by those attributes. In the experiments below, we will show that in various environments, training only on simple tasks, our agents are able to generalize to novel, more complex tasks. We consider an agent in a Markov environment, i.e. at each time the agent observes the state s and takes action a, which uniquely determines the probability P (s, a, s) of transitioning from s to s. We augment the environment with a map f: S → {ρ} from states to a set of user-defined attributes ρ. We assume that either f is provided or a small set of hand-labeled (s, ρ) pairs are provided in order to learning a mappingf. Hence, the attributes are human defined and constitute a space. Each state is mapped to a set of binary attributes (orange/purple dots). 
Our semi-parametric model comprises a graph over sets of attributes (e.g. "there is a blue block left of the red block"), with edge weightings according to the probability that a parametric policy network is able to transition between adjacent pairs. The attributes themselves are manually specified, but inferred from the observation through a neural network; and the graph structure and policy are learned during training via random exploration of the environment. Given a goal attribute set (green), we use the graph to find the shortest path (red) to it in attribute space. The policy network then executes the actions at each stage (gold arrows).form of supervision. Here we consider attributes that are sets of binary vectors. These user-specified attributes parameterize the set of goals that can be specified at test time. The model has three parts:1. a neural-net based attribute detectorf, which maps states s to a set of attributes ρ, i.e. ρ = f (s).2. a neural net-based policy π(s, ρ g) which takes a pair of inputs: the current state s and attributes of the goal state ρ g. Its output is a distribution over actions.3. a tabular transition function c π (ρ i, ρ j) that scores the possibility of π(s ρi, ρ j) transiting successfully from ρ i to ρ j in a small number of steps. The transition table keeps track of the transitions seen in training, enabling a transition graph G to be constructed that connects distant pairs of attributes. This high-level attribute graph is then searched at test time to find a path to the goal, with the policy network performing the low-level actions to transition between adjacent attributes. Since our model uses attributes for planning, we desire a property that we will call "ignorability" which says that the probability of being able to transition from ρ i to ρ j should only depend on the attributes ρ i, not the exact state; i.e. P π (f (s t) = ρ j |f (s t)) = P π (f (s t) = ρ j |s t ) 1. To the extent that this condition is violated, then transitions are aliased, and a planned transition may not be achievable by the policy from the particular state s even though it's achievable from other states with the same properties, causing the model to fail to achieve its goal. Note that in the experiments in 4.2, there will be nontrivial aliasing. The first step of training consists of fitting the attribute detectorsf that map states s to attributes ρ. As mentioned above, we assume that there is a set of labeled (state, attribute) examples we can use for fitting this part of the model. Note that these examples are the only supervision given to the model during the entire training procedure. To construct c, the agent samples transitions from the environment, finding the sets of attributes which actually occur in the environment (from the potentially large number of all possible attributes).In the experiments below, we place the agent in a state at random with attributes ρ i. It will then take a random action, or short sequence of actions. These lead to a new state with attributes ρ j, and c(ρ i, ρ j) is incremented. This is repeated many times, building up statistics on possible transitions within attribute space. The ing table represents a transition graph G with vertices given by all the ρ the agent has seen, and an edge between ρ i and ρ j counting the number of times the transition between ρ i and ρ j has occurred. 
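To make the first training phase concrete, the following is a minimal sketch (not the authors' implementation) of how the transition counts c(ρ_i, ρ_j) could be accumulated by random exploration. The environment interface (`reset`, `step`, `actions`) and the attribute detector `f` are assumed placeholders.

```python
import random
from collections import defaultdict

def build_transition_counts(env, f, num_rollouts=10000, max_steps=5):
    """Accumulate counts c[(rho_i, rho_j)] of observed attribute transitions.

    env: assumed interface with reset() -> state, step(action) -> state, and a list env.actions
    f:   attribute detector mapping a state to a hashable attribute set (e.g. a tuple)
    """
    c = defaultdict(int)
    for _ in range(num_rollouts):
        s = env.reset()                      # place the agent in a random state
        rho_i = f(s)
        for _ in range(max_steps):
            a = random.choice(env.actions)   # random exploration policy
            s = env.step(a)
            rho_j = f(s)
            if rho_j != rho_i:               # an attribute transition occurred
                c[(rho_i, rho_j)] += 1
                break                        # count one short transition per rollout
    return c
```

Each rollout contributes at most one counted transition in this sketch; a longer random action sequence, as described above, would simply keep stepping until some attribute changes.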
This procedure produces the graph G, but the counts in the graph are not normalized probabilities of transitioning from ρ i to ρ j (as we have never seen any negative samples). Therefore, given a low-level policy π (see Sec. 2.1.3) we can optionally perform a second phase of training to learn the probability that for each (ρ i, ρ j) in the graph, that if ρ i = f (s) then π(s, ρ i) will succeed at transitioning to ρ j. At a random state s with attributes ρ i, we pick ρ j from the set of goals for which c(ρ i, ρ j) > 0 and see if π is able to achieve this goal. While doing this, we keep track of the fraction of attempts for which the policy was successful at this transition and store this probability in c π (ρ i, ρ j). Finally, we need to train a policy network π = π(s, ρ g) to solve simple tasks, i.e. those that require a few actions to move between nearby attribute sets. One way of training π is as an "inverse model" in the style of BID1; BID3. In the first phase of training the graph, suppose we sample each initial state s 0 and an action sequence [a 0, a 1, ..., a k] from the exploration policy, causing the agent to arrive at a new state with attributes ρ 1. We then treat a 0 as the "correct" action for π(s 0, ρ 1) and update its parameters for this target. Alternatively, π can be trained using reinforcement learning. After an initial graph c is constructed, tasks can be chosen by sampling from states nearby the initial or current state properties. Once the model has been built we can use it for planning. That is, given an input state s and target set of attributes ρ T, we find a path [ρ 0, ρ 1, ..., ρ m] on the graph G with ρ 0 = f (s) and ρ m = ρ T maximizing DISPLAYFORM0 The optimal path can be found using Dijkstra's algorithm with a distance metric of − log(c π (ρ i, ρ i+1)). The policy is then used to move along the ing path between attribute set, i.e. we take actions according to a = π(s, ρ 1), then once f (s) = ρ 1, we change to a = π(s, ρ 2) and so on. At each intermediate step, if the current attributes don't match the attributes on the computed path, then a new path is computed using the current attributes as a starting point (or, equivalently, the whole path is recomputed at each step). Hierarchical RL Many researchers have recognized the importance of methods that can divide a MDP into subprocesses BID34 BID24 BID30 BID8. Perhaps the most standard formalism today is the options framework of BID30, which deals with multistep "macro-actions" in the setting of reinforcement learning. Recent works, like BID16, have shown how options can be used with function approximation via deep learning. Our work is also a hierarchical approach to controlling an agent in a Markovian environment. However, the paradigm we consider differs from reinforcement learning: we consider a setup where no reward or supervision is provided other than the (s, ρ(s)) pairs, and show than an agent can learn to decompose a transition between far away ρ, ρ into a sequence of short transitions. If we were to frame the problem as HRL, considering each π(·, ρ) as a macro action 2, in order for the agent to learn to sequence the π(·, ρ i), the environment would need to give reward for the completion of complex tasks, not just simple ones. As opposed to e.g. BID16, where additional human supervision is used to allow exploration in the face of extremely sparse rewards, our goal is to show that adding human supervision to parameterize the task space via attributes allows compositionality through planning. 
Horde and descendants Our work is related to generalized value functions BID31 in that we have policies parameterized by state and target attributes. In particular, if we used a parameterized model for c, it would be similar to the factored state-goal representation in BID27. Recently, van used human provided attributes as a general value function (GVF) in Ms. Pacman, showing that using a weighted combination of these can lead to higher scores than standard rewards. Although the representation used in that work is similar to the one we use, the motivation in our work is to allow generalization to new tasks; and we use the attributes to plan, rather than just as tools for building a reactive policy. Factored MDP and Relational MDP Our approach is closely related to factored MDP BID5 BID8 BID13. In these works, it is assumed that the environment can be represented by discrete attributes, and that transitions between the attributes by an action can be modeled as a Bayesian network. The value of each attribute after an action is postulated to depend in a known way on attributes from before the action. The present work differs from these in that the attributes do not determine the state and the dependency graph is not assumed to be known. More importantly, the focus in this work is on organizing the space of tasks through the attributes rather than being able to better plan a specific task; and in particular being able to generalize to new, more complex tasks at test time. Our approach is also related to Relational MDP and Object Oriented MDP BID14 BID36 BID9 BID0, where states are described as a set of objects, each of which is an instantiation of canonical classes, and each instantiated object has a set of attributes. Our work is especially related to BID12, where the aim is to show that by using a relational representation of an MDP, a policy from one domain can generalize to a new domain. However, in the current work, the attributes are taken directly as functions of the state, as opposed to defined for object classes, and we do not have any explicit encoding of how objects interact. The model is given some examples of various attributes, and builds a parameterized model that maps into the attributes. The Programmable Agents of BID7 put the notions of objects and attributes (as in relational MDP) into an end-to-end differentiable neural architecture. Our work is similar to this one in that it includes learned mappings from states to attributes (f in our work, detectors in theirs; although we do not learn these end-to-end), and experiments in the setting of manipulating blocks in a physics simulator. On the other hand, our model uses explicit search instead of an end-to-end neural architecture to reason over attributes. Moreover, in BID7, the agent is trained and tested on similar tasks, but the object properties at test are novel; whereas our model is trained on simple tasks but generalizes to complex ones. There is a large literature on quickly adapting to a new learning problem given a set or a history of related learning problems. Our approach in this work shares ideas with the one in BID15, where tasks are augmented with descriptors and featurized. Our attributes correspond to these features. In that work, the coefficients of the task features in a sparse dictionary are used to weight a set of vectors defining the model for the associated task. In our work, the low level actor takes in the task features, but we learn how to transit between sets of features, and plan in that space. 
Similarly, the task is specified by a feature as an input into a model in BID18, again this corresponds to the way our low-level actor processes its goal. Several recent deep reinforcement learning works have used modular architectures and hierarchy to achieve generalization to new tasks. For example, BID32 uses pre-trained skills for transfer. BID22 uses a meta-controller that selects parameterized skills and analogical supervision on outer-product structured tasks. Our assignments of attributes serves a similar purpose to their analogical supervision, and we use parameterized skills as these works do. However, our "meta-controller" is the search over attributes, rather than a reactive model. In BID2, generalization is achieved through supervision in the form of "policy sketches", which are symbolic representations of the high level steps necessary to complete a given task. The low level steps in executing modules in the sketches are composable. Our work is similar in that high level annotation is used to enable generalization, but the mechanism in this work is different. Note that the approaches in BID2; BID22 are complementary to the one described here; in future work we wish to explore combining them. Semiparametric methods In this work we use an explicit memory of sets of attributes the model has seen. Several previous works have used non-parametric memories for lowering the sample complexity of learning, e.g. BID4 BID25. Like these, we lean on the fact that with a good representation of a state, it can be useful to memorize what to do in given situation (having only done it a small number of times) and explicitly look it up. In our case, the "good representation" is informed by the user-specified attributes. Our approach is also related to BID19, which builds up a multiscale representation of an MDP using Eigenvectors of the transition matrix of the MDP, in the sense that we collect data on possible transitions between attributes in a first phase of training, and then use this knowledge at test time. BID39 and relative judgements BID23. However, all these are static I.I.D settings, in contrast to dynamic agent/environment that this work explores. We evaluate our approach (Attribute Planner, abbreviated to AP) in three different environments. The first two are randomly generated grid-worlds, and the third is a simulation of stacking blocks. In each environment, the goal of the model is to be able to generalize to testing on complex tasks from training on simpler tasks. We compare against baseline policies trained in several ways. These baseline policies take the state and goal as inputs, and use the same neural network architecture as the policy used for the Attribute Planner.1. Reinforcement Learning: Policies trained via reinforcement learning with A3C BID21 or Reinforce. We consider three variants of training: (i) training only with nearby goals (one attribute transition for grid-world; single actions for block stacking); (ii) training on the evaluation tasks; and (iii) training on a curriculum that transitions from nearby goals to evaluation tasks. Policies (ii) and (iii) are trained on full sequences, thus have an inherent advantage over our model, which only sees short sequences during training. 2. Inverse: An inverse model trained in a "supervised" fashion on a dataset of observed trajectories to predict the next action given the state and goal. We train on nearby goals and on longer multi-step tasks. 
We implemented two types of small 2-D environments in Mazebase BID29, where the worlds are randomly generated for each episode. The action space for each consists of movements in the four cardinal directions, and additional environment specific actions. The first environment consists of four switches, each with four possible colors. An extra toggle action cycles the color of a switch if the agent is standing on it. The attributes for this environment are the states of the switches; and the tasks are to change the switches into a specified configuration, as shown in FIG2. The locations and colors of the switches are randomly initialized for each episode. Crafting In the second environment, similar to the one used in BID2 an agent needs to collect resources and combine them to form items. In addition to moving in the cardinal directions, the agent has a "grab" action that allows it to pick up a resource from the current location and add it to its inventory. If there is no item where the agent is standing, this action does nothing. The agent also has a "craft" action that combines a set of items to create a new item if the agent has the prerequisite items in its inventory and the agent is standing on a special square (a "crafting table") corresponding to the item to be crafted. If these two conditions are not both met, the "craft" action does nothing. The attributes for this environment are the items in the inventory, and task is to add a specified item to the inventory. In the environment, there are three types of resources and three types of products (see FIG2). The episodes are initialized randomly by removing some resources, and adding some items to the inventory. DISPLAYFORM0 Crafting key:Switch color game Crafting game Goal: The observation is given as a bag of words in both environments, where the words correspond to (feature, location). Features consist of item types, names, and their other properties. The locations include position relative to the agent in the maze, and also a few special slots for inventory, current, and target attributes. The first phase building the high level transitions as in Section 2.1.2 is done by running a random agent in the environment from a random state until one or more attributes change. Then this change of attribute is recorded as an edge in graph G. The low level policy is trained concurrently with the second phase of Section 2.1.2. We use the current edge estimates in the graph to propose a target set of attributes, and the low level policy is trained with the Reinforce algorithm BID38 to reach that set of attributes from the current state. These training episodes terminate when the task completes or after 80 steps; and a reward of -0.1 is given at every step to encourage the agent to complete the task quickly. The policy network is a fully connected network with two hidden layers of 100 units. We run each experiment three times with different random seeds, and report the mean success rate. In the switches environment, multi-step (test) tasks are generated by setting a random attribute as target, which can require up to 12 attribute transitions. In the crafting environment, multi-step (test) tasks are generated by randomly selecting an item as a target. Since we do not care about other items in the inventory, the target state is underspecified. Some tasks are pre-solved because the randomly chosen target item can already be in the inventory. However, such tasks have no effect on training and are also removed during testing. 
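As an illustration only (our own schematic, not the paper's environment code), the attribute sets and task sampling for the two grid-worlds described above might be represented as follows; the crafting item names are invented for the example.

```python
import itertools
import random

# Switches game: the attribute set is the tuple of the four switch colors.
NUM_SWITCHES, NUM_COLORS = 4, 4
ALL_SWITCH_ATTRIBUTES = list(itertools.product(range(NUM_COLORS), repeat=NUM_SWITCHES))

def sample_switch_task(rng: random.Random):
    """Multi-step task: reach a uniformly random target switch configuration."""
    return rng.choice(ALL_SWITCH_ATTRIBUTES)

# Crafting game: the attribute set is the contents of the inventory, and a task is
# underspecified -- it only names one target item that must appear in the inventory.
ITEMS = ["wood", "ore", "grass", "plank", "metal", "rope"]  # illustrative names only

def crafting_task_satisfied(inventory, target_item):
    """A crafting task is solved once the target item is in the inventory."""
    return target_item in inventory
```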
In curriculum training which we use as a baseline, we gradually increase the upper bound on the difficulty of tasks. In the switches environment, the difficulty corresponds to the number of toggles necessary for solving the task. The craft environment has two levels of difficulty: tasks can be completed by a single grab or craft action, and tasks that require multiple such actions. During the second phase of training, we simultaneously compute the transition function c π (ρ i, ρ j) using an exponentially decaying average of success rates of the low level policy π: DISPLAYFORM1 where T is the number of training epochs, N t π is the number of task (ρ i, ρ j) during epoch t, and M t π is the number of successful episodes among them. A decay rate of γ = 0.9 is used. We consider a 3D block stacking task in the Mujoco environment BID35. There are 4 blocks of different colors, and actions consist of dropping a block in a 3 × 3 grid of positions, ing in 36 total actions. A block cannot be moved when it is underneath another block, so some actions have no effect. The input to the model is the observed image, and there are a total of 36 binary properties corresponding to the relative x and y positions of the blocks and whether blocks are stacked on one another. For example, one property corresponds to "blue is on top of yellow". Each training episode is initiated from a random initial state and lasts only one step, i.e. dropping a single block in a new location. The policy network takes (i) a 128 × 128 image, which is featurized by a CNN with five convolutional layers and one fully connected (fc) layer to produce a 128d vector; and (ii) goal properties expressed as a 48d binary vector, which are transformed to a 128d vector. The two 128d vectors are concatenated and combined by two fc layers followed by softmax to produce an output distribution over actions. We use an exponential linear nonlinearity after each layer. TAB2 compares the performance of different models on several block stacking tasks. In the onestep task, a goal is chosen that is the of taking a single random action. In the multi-step task, the goal is chosen as the properties of a new random initialization. These tasks typically require 3 − 8 steps to complete. In the 4-stack task, the goal is a vertical stack of blocks in the order red, green, blue, yellow. We compare the performance of our model to reactive policy baselines trained on single-step tasks, complex multi-step tasks or with a curriculum of both. We perform each evaluation task 1000 times. The single-step reactive policies perform well on single step tasks (which are what it sees at train time), but perform much worse compared to the AP model when transferred to multi-step tasks. The AP model without the second step of training that learns c π also performs substantially worse on multi-step tasks, demonstrating the importance of properly normalized transition probabilities to avoid aliased states. The rightmost two columns of TAB2 consider underspecified goals, where only a subset of the attributes are provided. These are identical to their fully-specified counterparts, except that each attribute is left unspecified with probability 30%. The AP model handles these naturally by finding the shorted path to any satisfactory attribute set. We consider reactive baselines that are trained on the same distribution of underspecified attribute sets. Despite this, we observe that reactive policy performance degrades when goals are underspecified, while our AP model does not. 
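The decaying-average estimate of c_π described above and the shortest-path planning of Section 2.2 can be sketched together as below. This is our own sketch: the exact form of the decaying average is elided in this copy, so `decayed_success_rate` is an assumed reconstruction from the surrounding description (per-epoch successes M_t, attempts N_t, decay γ = 0.9), and `plan` is a standard Dijkstra search over −log c_π edge costs, which also covers the underspecified-goal case by passing any satisfactory attribute set as the goal.

```python
import heapq
import math

def decayed_success_rate(M, N, gamma=0.9):
    """Assumed form of c_pi for one edge: decaying average of per-epoch success rates.

    M[t] = successful episodes for this edge in epoch t, N[t] = attempted episodes in epoch t.
    """
    T = len(N)
    num = sum(gamma ** (T - t) * M[t] for t in range(T))
    den = sum(gamma ** (T - t) * N[t] for t in range(T))
    return num / max(den, 1e-8)

def plan(graph, rho_start, rho_goal):
    """Dijkstra over attribute sets with edge cost -log c_pi(rho_i, rho_j).

    graph[rho_i] is a dict {rho_j: c_pi} of estimated transition success rates.
    Returns the path [rho_start, ..., rho_goal] maximizing the product of c_pi.
    """
    dist, prev = {rho_start: 0.0}, {}
    frontier = [(0.0, rho_start)]
    while frontier:
        d, rho = heapq.heappop(frontier)
        if rho == rho_goal:
            break
        if d > dist.get(rho, float("inf")):
            continue
        for nxt, c in graph.get(rho, {}).items():
            nd = d - math.log(max(c, 1e-8))
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, rho
                heapq.heappush(frontier, (nd, nxt))
    if rho_goal != rho_start and rho_goal not in prev:
        return None  # goal not reachable in the current graph
    path = [rho_goal]
    while path[-1] != rho_start:
        path.append(prev[path[-1]])
    return path[::-1]
```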
The attribute detectorf predicts the full attribute set with < 0.1% error when trained on the full dataset of 1 million examples. If trained on only 10,000 examples, the attribute detector has an error rate of 1.4%. Training the AP model with this less-accurate attribute detector degrades multi-step performance by only 0.9%.Property Aliasing: The "ignorability" assumption we made in Section 2 is violated in the block stacking task. To see why, consider a transition from "red left of blue and yellow" to "red right of blue and yellow". This can typically be accomplished in one step, but if blue and yellow are already on the far right, it cannot. Thus, states where this transition are possible and impossible are aliased with the same properties. This is the dominant source of errors on the multi-step task when trained on large sample sizes (in fact, it is the only source of errors as the policy approaches 100% Table 3 : Effect of the number of (one-step) training examples on one-step and multi-step performance, for an inverse model and the Attribute Planner model. The inverse models are trained on 2x the samples, including the samples generated from learning c π in our AP method.accuracy and the graph becomes complete). FIG4 shows an example plan that becomes stuck due to aliasing. The second step of training, that computes the probability of π transitioning on each edge, is important for mitigating the effects of aliasing in the block stacking task. The graph search finds the path with the highest probability of success (i.e. the product of probabilities on each edge), so it avoids edges that have high aliasing. In the AP model trained on one million samples, the second step of training improves multi-step performance from 29.7% to 66.7%, as shown in TAB2. Our show that structuring the space of tasks with high level attributes allows an agent to compose policies for the solutions of simple tasks into solutions of more complex tasks. The agent plans a path to the final goal at the level of the attributes, and executes the steps in this path with a reactive policy. Thus, supervision of an agent by labeling attributes can lead to generalization from simple tasks at train time to more complex tasks at test time. Nevertheless, there are many fronts for further work:Sample complexity of the planning module: In Table 5 we can see both the benefits and the liabilities of the explicit non-parametric form for c. By 10K samples, the parametric lower level policy is already able to have a reasonable success rate. However, because in this environment, there are roughly 200K edges in the graph, most of the edges have not been seen, and without any weight-sharing, our model cannot estimate these transition probabilities. On the other hand, by 100K samples the model has seen enough of the graph to make nontrivial plans; and the non-parametric form of the graph makes planning straightforward. In future work, we hope to combine parametric models for c with search to increase the sample efficiency of the planning module. Alternatively, In frame 4 of this example, the policy is directed to place the green block in front of the red and blue blocks, but this is impossible because the blue and red are already in the frontmost position.we might hope to make progress on dynamic abstraction (projecting out some of the attributes) depending on the current state and goal, which would make the effective number of edges of the graph smaller. 
Exploration Although we discuss an agent in an environment, we have elided many of the difficult problems of reinforcement learning. In particular, the environments considered in this work allow sampling low level transitions by starting at random states and following random policies, and these are sufficient to cover the state space, although we note that the method for training the model described in Section 2.1 allows for more sophisticated exploration policies. Thus we sidestep the exploration problem, one of the key difficulties of reinforcement learning. Nevertheless, building composable models even in this setting is nontrivial, and our view is that it is important to demonstrate success here (and decouple issues of exploration and composability) before moving on to the full RL problem. We believe that the attributes ρ and c, in addition to their usefulness for planning, provide a framework for incentivizing exploration. The agent can be rewarded for finding unseen (or rarely-seen) high level transitions, or for validating or falsifying hypotheses about the existence of entries of c. Learning the attributes: Discovering the attributes automatically would remove much of the need for human supervision. Recent work, such as BID33, demonstrates how this could be done. Another avenue for discovering attributes is to use a few "seed" attributes; and use aliasing as a signal that some attributes need to be refined. We also look into the use of exploration and learn a policy to promote exploration of undiscovered edges in our planning graph. We give a negative reward proportional to DISPLAYFORM0 where n is the number of times that edge has been encountered before and t the time episode. We compare this with a baseline method of single random start and random action selection over N and the method used in the main paper, with N random starts and single rollouts. Our Mazebase environments in Section 4.1 are designed so that all edges can be discovered without much exploration. So to better test the benefit of exploration, we modified the craft environment to make discovering all edges harder. First, we added 4 new "super" products that can be crafted by combining one normal product with another resource, or three resources. Second, the environment always starts with only 3 resources and an empty inventory. Therefore, it is much harder to discover a super product because it requires agent to craft a product out of two resources, and then pick another resource and craft. In this hard crafting environment, a random agent discovered 18.6 edges on average, while an agent with the exploration reward discovered all 25 edges of the environment. We used this complete graph to train our AP model and other baselines. Training on one-step tasks require episodes to start from different nodes of the graph, but the environment always initializes at the same node. A simple solution was not to reset the environment between episodes if the previous episode was successful. Thus the next episodes will start from a different node. TAB3 shows the success rates on multi-step tasks, where our AP model clearly outperforms the other baselines. Training data Hard Crafting (multi-step)Reinforce multi-step + curriculum 51.5% Reinforce one-step 26.0% AP one-step 99.8% Table 5: Effect of the number of (one-step) training examples on one-step and multi-step performance, for an inverse model and the Attribute Planner model. 
The inverse models are trained on 2x the samples, including the samples generated from learning c π in our AP method. | Compositional attribute-based planning that generalizes to long test tasks, despite being trained on short & simple tasks. | 1,228 | scitldr |
The ability to learn new concepts with small amounts of data is a critical aspect of intelligence that has proven challenging for deep learning methods. Meta-learning has emerged as a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks. However, most meta-learning algorithms implicitly require that the meta-training tasks be mutually-exclusive, such that no single model can solve all of the tasks at once. For example, when creating tasks for few-shot image classification, prior work uses a per-task random assignment of image classes to N-way classification labels. If this is not done, the meta-learner can ignore the task training data and learn a single model that performs all of the meta-training tasks zero-shot, but does not adapt effectively to new image classes. This requirement means that the user must take great care in designing the tasks, for example by shuffling labels or removing task identifying information from the inputs. In some domains, this makes meta-learning entirely inapplicable. In this paper, we address this challenge by designing a meta-regularization objective using information theory that places precedence on data-driven adaptation. This causes the meta-learner to decide what must be learned from the task training data and what should be inferred from the task testing input. By doing so, our algorithm can successfully use data from non-mutually-exclusive tasks to efficiently adapt to novel tasks. We demonstrate its applicability to both contextual and gradient-based meta-learning algorithms, and apply it in practical settings where applying standard meta-learning has been difficult. Our approach substantially outperforms standard meta-learning algorithms in these settings. The ability to learn new concepts and skills with small amounts of data is a critical aspect of intelligence that many machine learning systems lack. Meta-learning has emerged as a promising approach for enabling systems to quickly learn new tasks by building upon experience from previous related tasks (; ; ; ;). Meta-learning accomplishes this by explicitly optimizing for few-shot generalization across a set of meta-training tasks. The meta-learner is trained such that, after being presented with a small task training set, it can accurately make predictions on test datapoints for that meta-training task. While these methods have shown promising , current methods require careful design of the meta-training tasks to prevent a subtle form of task overfitting, distinct from standard overfitting in supervised learning. If the task can be accurately inferred from the test input alone, then the task training data can be ignored while still achieving low meta-training loss. In effect, the model will collapse to one that makes zero-shot decisions. This presents an opportunity for overfitting where the meta-learner generalizes on meta-training tasks, but fails to adapt when presented with training data from novel tasks. We call this form of overfitting the memorization problem in meta-learning because the meta-learner memorizes a function that solves all of the meta-training tasks, rather than learning to adapt. Existing meta-learning algorithms implicitly resolve this problem by carefully designing the metatraining tasks such that no single model can solve all tasks zero-shot; we call tasks constructed in this Implementation and examples available here: https://github.com/google-research/ google-research/tree/master/meta_learning_without_memorization. 
way mutually-exclusive. For example, for N -way classification, each task consists of examples from N randomly sampled classes. The N classes are labeled from 1 to N, and critically, for each task, we randomize the assignment of classes to labels {1, 2, . . ., N} (visualized in Appendix Figure 3). This ensures that the task-specific class-to-label assignment cannot be inferred from a test input alone. However, the mutually-exclusive tasks requirement places a substantial burden on the user to cleverly design the meta-training setup (e.g., by shuffling labels or omitting goal information). While shuffling labels provides a reasonable mechanism to force tasks to be mutually-exclusive with standard few-shot image classification datasets such as MiniImageNet , this solution cannot be applied to all domains where we would like to utilize meta-learning. For example, consider meta-learning a pose predictor that can adapt to different objects: even if N different objects are used for meta-training, a powerful model can simply learn to ignore the training set for each task, and directly learn to predict the pose of each of the N objects. However, such a model would not be able to adapt to new objects at meta-test time. The primary contributions of this work are: 1) to identify and formalize the memorization problem in meta-learning, and 2) to propose an meta-regularizer (MR) using information theory as a general approach for mitigating this problem without placing restrictions on the task distribution. We formally differentiate the meta-learning memorization problem from overfitting problem in conventional supervised learning, and empirically show that naïve applications of standard regularization techniques do not solve the memorization problem in meta-learning. The key insight of our metaregularization approach is that the model acquired when memorizing tasks is more complex than the model that from task-specific adaptation because the memorization model is a single model that simultaneously performs well on all tasks. It needs to contain all information in its weights needed to do well on test points without looking at training points. Therefore we would expect the information content of the weights of a memorization model to be larger, and hence the model should be more complex. As a , we propose an objective that regularizes the information complexity of the meta-learned function class (motivated by ;). Furthermore, we show that meta-regularization in MAML can be rigorously motivated by a PAC-Bayes bound on generalization. In a series of experiments on non-mutually-exclusive task distributions entailing both few-shot regression and classification, we find that memorization poses a significant challenge for both gradient-based and contextual (a) meta-learning methods, ing in near random performance on test tasks in some cases. Our meta-regularization approach enables both of these methods to achieve efficient adaptation and generalization, leading to substantial performance gains across the board on non-mutually-exclusive tasks. We focus on the standard supervised meta-learning problem (see, e.g.,). Briefly, we assume tasks T i are sampled from a task distribution p(T). During meta-training, for each task, we observe a set of training data D i = (x i, y i) and a set of test data D * i = (x * i, y * i) with x i = (x i1, . . ., x iK), y i = (y i1, . . ., y iK) sampled from p(x, y|T i), and similarly for D * i. We denote the entire meta-training set as. 
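To make the two adaptation mechanisms above concrete, here is a schematic sketch (ours, using a linear-Gaussian predictor in plain numpy rather than the actual networks) of how the task-specific parameters φ are produced from the task training data D: by one gradient step on the task training log-likelihood in MAML, and by encoding-then-pooling in CNP.

```python
import numpy as np

def maml_adapt(theta, D_x, D_y, alpha=0.01):
    """One inner gradient step: phi = theta + (alpha / K) * sum of grad_theta log q(y|x, theta).

    For illustration the predictor is linear-Gaussian, y ~ N(x @ theta, 1), so the
    per-example log-likelihood gradient is x * (y - x @ theta).
    D_x has shape (K, d), D_y has shape (K,), theta has shape (d,).
    """
    K = len(D_x)
    grad = D_x.T @ (D_y - D_x @ theta) / K
    return theta + alpha * grad

def cnp_adapt(encoder, aggregator, D_x, D_y):
    """CNP-style summary phi = a_theta(h_theta(D)): encode each (x, y) pair, then pool.

    encoder and aggregator are assumed callables, e.g. aggregator = lambda h: h.mean(axis=0).
    """
    pairs = np.concatenate([D_x, D_y[:, None]], axis=-1)  # one row per (x, y) pair
    return aggregator(encoder(pairs))
```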
The goal of meta-training is to learn a model for a new task T by leveraging what is learned during meta-training and a small amount of training data for the new task D. We use θ to denote the meta-parameters learned during meta-training and use φ to denote the task-specific parameters that are computed based on the task training data.; , given a meta-training set M, we consider meta-learning algorithms that maximize conditional likelihood q(ŷ * = y * |x *, θ, D), which is composed of three distributions: q(θ|M) that summarizes meta-training data into a distribution on metaparameters, q(φ|D, θ) that summarizes the per-task training set into a distribution on task-specific parameters, and q(ŷ * |x *, φ, θ) that is the predictive distribution. These distributions are learned to minimize − For example, in MAML , θ and φ are the weights of a predictor network, q(θ|M) is a delta function learned over the meta-training data, q(φ|D, θ) is a delta function centered at a point defined by gradient optimization, and φ parameterizes the predictor network q(ŷ * |x *, φ) . In particular, to determine the task-specific parameters φ, the task training data D and θ are used in the predictor model φ = θ + α K (x,y)∈D ∇ θ log q(y|x, φ = θ). Another family of meta-learning algorithms are contextual methods , such as conditional neural processes (CNP) (b; a). CNP instead defines q(φ|D, θ) as a mapping from D to a summary statistic φ (parameterized by θ). In particular, φ = a θ • h θ (D) is the output of an aggregator a θ (·) applied to features h θ (D) extracted from the task training data. Then θ parameterizes a predictor network that takes φ and x * as input and produces a predictive distribution q(ŷ * |x *, φ, θ). In the following sections, we describe a common pitfall for a variety of meta-learning algorithms, including MAML and CNP, and a general meta-regularization approach to prevent this pitfall. The ideal meta-learning algorithm will learn in such a way that generalizes to novel tasks. However, we find that unless tasks are carefully designed, current meta-learning algorithms can overfit to the tasks and end up ignoring the task training data (i.e., either q(φ|D, θ) does not depend on D or q(ŷ * |x *, φ, θ) does not depend on φ, as shown in Figure 1 ), which can lead to poor generalization. This memorization phenomenon is best understood through examples. Consider a 3D object pose prediction problem (illustrated in Figure 1), where each object has a fixed canonical pose. The (x, y) pairs for the task are 2D grey-scale images of the rotated object (x) and the rotation angle relative to the fixed canonical pose for that object (y). In the most extreme case, for an unseen object, the task is impossible without using D because the canonical pose for the unseen object is unknown. The number of objects in the meta-training dataset is small, so it is straightforward for a single network to memorize the canonical pose for each training object and to infer the object from the input image (i.e., task overfitting), thus achieving a low training error without using D. However, by construction, this solution will necessarily have poor generalization to test tasks with unseen objects. As another example, imagine an automated medical prescription system that suggests medication prescriptions to doctors based on patient symptoms and the patient's previous record of prescription responses (i.e., medical history) for adaptation. In the meta-learning framework, each patient represents a separate task. 
Here, the symptoms and prescriptions have a close relationship, so we cannot assign random prescriptions to symptoms, in contrast to the classification tasks where we can randomly shuffle the labels to make the tasks mutually exclusive. For this non-mutually-exclusive task distribution, a standard meta-learning system can memorize the patients' identity information during training, leading it to ignore the medical history and only utilize the symptoms combined with the memorized information. As a result, it may issue highly accurate prescriptions on the meta-training set, but fail to adapt to new patients effectively. While such a system would achieve a baseline level of accuracy for new patients, it would be no better than a standard supervised learning method applied to the pooled data. We formally define (complete) memorization as: Definition 1 (Complete Meta-Learning Memorization). Complete memorization in meta-learning is when the learned model ignores the task training data such that I(ŷ * ; D|x *, θ) = 0. Memorization describes an issue with overfitting the meta-training tasks, but it does not preclude the network from generalizing to unseen (x, y) pairs on tasks similar to the training tasks. Memorization becomes an undesired problem for generalization to new tasks when I(y * ; D|x *) ≫ I(ŷ * ; D|x *, θ) (i.e., the task training data is necessary to achieve good performance, even with exact inference under the data generating distribution, to make accurate predictions). A model with the memorization problem may generalize to new datapoints in training tasks but cannot generalize to novel tasks, which distinguishes it from typical overfitting in supervised learning. In practice, we find that MAML and CNP frequently converge to this memorization solution (Table 2). For MAML, memorization can occur when a particular setting of θ that does not adapt to the task training data can achieve comparable meta-training error to a solution that adapts θ. For example, if a setting of θ can solve all of the meta-training tasks (i.e., for all (x, y) in D and D * the predictive error is close to zero), the optimization may converge to a stationary point of the MAML objective where minimal adaptation occurs based on the task training set (i.e., φ ≈ θ). For a novel task where it is necessary to use the task training data, MAML can in principle still leverage the task training data because the adaptation step is based on gradient descent. However, in practice, the poor initialization of θ can affect the model's ability to generalize from a small amount of data. For CNP, memorization can occur when the predictive distribution network q(ŷ * |x *, φ, θ) can achieve low training error without using the task training summary statistics φ. On a novel task, the network is not trained to use φ, so it is unable to use the information extracted from the task training set to effectively generalize. In some problem domains, the memorization problem can be avoided by carefully constructing the tasks. For example, for N-way classification, each task consists of examples from N randomly sampled classes. If the classes are assigned to a random permutation of the N labels for each task, this ensures that the task-specific class-to-label assignment cannot be inferred from the test inputs alone. As a result, a model that ignores the task training data cannot achieve low training error, preventing convergence to the memorization problem. We refer to tasks constructed in this way as mutually-exclusive.
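The label-shuffling construction can be written out in a few lines; this is our own sketch, with `class_to_examples` assumed to map each class to its pool of examples. The per-task random permutation is what prevents the class-to-label assignment from being inferred from a test input alone.

```python
import random

def sample_mutually_exclusive_task(class_to_examples, N, k_shot, k_query, rng=random):
    """Build one N-way task with a per-task random class-to-label assignment."""
    classes = rng.sample(list(class_to_examples), N)
    labels = list(range(N))
    rng.shuffle(labels)                       # the crucial per-task permutation
    train, test = [], []
    for cls, lab in zip(classes, labels):
        examples = rng.sample(class_to_examples[cls], k_shot + k_query)
        train += [(x, lab) for x in examples[:k_shot]]
        test += [(x, lab) for x in examples[k_shot:]]
    return train, test
```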
However, the mutually-exclusive tasks requirement places a substantial burden on the user to cleverly design the meta-training setup (e.g., by shuffling labels or omitting goal information) and cannot be applied to all domains where we would like to utilize meta-learning. The training tasks are non-mutually-exclusive because the test data label (right) can be inferred accurately without using task training data (left) in the training tasks, by memorizing the canonical orientation of the meta-training objects. For a new object and canonical orientation (bottom), the task cannot be solved without using task training data (bottom left) to infer the canonical orientation. Right: Graphical model for meta-learning. Observed variables are shaded. Without either one of the dashed arrows,Ŷ * is conditionally independent of D given θ and X *, which we refer to as complete memorization (Definition 1). At a high level, the sources of information in the predictive distribution q(ŷ * |x *, θ, D) come from the input, the meta-parameters, and the data. The memorization problem occurs when the model encodes task information in the predictive network that is readily available from the task training set (i.e., it memorizes the task information for each meta-training task). We could resolve this problem by encouraging the model to minimize the training error and to rely on the task training dataset as much as possible for the prediction of y * (i.e., to maximize I(ŷ * ; D|x *, θ)). Explicitly maximizing I(ŷ * ; D|x *, θ) requires an intractable marginalization over task training sets to compute q(ŷ * |x *, θ). Instead, we can implicitly encourage it by restricting the information flow from other sources (x * and θ) toŷ *. To achieve both low error and low mutual information betweenŷ * and (x *, θ), the model must use task training data D to make predictions, hence increasing the mutual information I(ŷ * ; D|x *, θ), leading to reduced memorization. In this section, we describe two tractable ways to achieve this. Given θ, the statistical dependency between x * andŷ * is controlled by the direct path from x * toŷ * and the indirect path through D (see Figure 1), where the latter is desirable because it leverages the task training data. We can control the information flow between x * and y * by introducing an intermediate stochastic bottleneck variable z et al., 2016) as shown in Figure 4. Now, we would like to maximize I(ŷ * ; D|z *, θ) to prevent memorization. We can bound this mutual information by where r(z *) is a variational approximation to the marginal, the first inequality follows from the statistical dependencies in our model (see Figure 4 and Appendix A.2 for the proof), and we use the fact that z * is conditionally independent of D given x * and θ. By simultaneously minimizing and maximizing the mutual information I(x * ;ŷ * |D, θ), we can implicitly encourage the model to use the task training data D. For non-mutually exclusive problems, the true label y * is dependent on x *. Hence if I(x * ;ŷ * |D, θ) = 0 (i.e., the predictionŷ * is independent of x * given the task training data and θ) the predictive likelihood will be low. This suggests replacing the maximization of I(x * ;ŷ * |D, θ) with minimization of the training loss in Eq., ing in the following regularized training objective where β modulates the regularizer and we set r(z *) as N (z * ; 0, I). We refer to this regularizer as meta-regularization (MR) on the activations. 
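A minimal sketch (ours, in plain numpy) of the activation-level regularizer just described: the encoder producing the mean and log-variance of the bottleneck z* is assumed, and the per-example KL between q(z*|x*, θ) and r(z*) = N(0, I) is added to the prediction loss with weight β.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

def mr_activation_loss(nll, z_mu, z_log_var, beta):
    """Meta-regularization on activations: prediction NLL plus beta * KL.

    nll:             per-test-point -log q(y*|z*, phi, theta)
    z_mu, z_log_var: parameters of the bottleneck q(z*|x*, theta), per test point
    """
    return np.mean(nll + beta * kl_to_standard_normal(z_mu, z_log_var))
```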
As we demonstrate in Section 6, we find that this regularizer performs well, but in some cases can fail to prevent the memorization problem. Our hypothesis is that in these cases, the network can sidestep the information constraint by storing the prediction of y * in a part of z *, which incurs only a small penalty. Alternatively, we can penalize the task information stored in the meta-parameters θ. Here, we provide an informal argument and provide the complete argument in Appendix A.3. Analogous to the supervised setting , given meta-training dataset M, we consider θ as random variable where the randomness can be introduced by training stochasticity. We model the stochasticity over θ with a Gaussian distribution N (θ; θ µ, θ σ) with learned mean and variance parameters per dimension . By penalizing I(y * 1:N, D 1:N ; θ|x * 1:N), we can limit the information about the training tasks stored in the metaparameters θ and thus require the network to use the task training data to make accurate predictions. We can tractably upper bound it by where r(θ) is a variational approximation to the marginal, which we set to N (θ; 0, I). In practice, we apply meta-regularization to the meta-parameters θ that are not used to adapt to the task training data and denote the other parameters asθ. In this way, we control the complexity of the network that can predict the test labels without using task training data, but we do not limit the complexity of the network that processes the task training data. Our final meta-regularized objective can be written as For MAML, we apply meta-regularization to the parameters uninvolved in the task adaptation. For CNP, we apply meta-regularization to the encoder parameters. The detailed algorithms are shown in Algorithm 1 and 2 in the appendix. Now that we have derived meta regularization approaches for mitigating the memorization problem, we theoretically analyze whether meta regularization leads to better generalization via a PAC-Bayes bound. In particular, we study meta regularization (MR) on the weights (W) of MAML, i.e. MR-MAML (W), as a case study. Meta regularization on the weights of MAML uses a Gaussian distribution N (θ; θ µ, θ σ) to model the stochasticity in the weights. Given a task and task training data, the expected error is given by where the prediction loss L(x *, y *, φ i) is bounded 1. Then, we would like to minimize the error on novel tasks We only have a finite sample of training tasks, so computing er(Q) is intractable, but we can form an empirical estimate: where for exposition we have assumed |D * i | = K are the same for all tasks. We would like to, but the challenge is that θ µ and θ σ are derived from the meta-training tasks There are two sources of generalization error: (i) error due to the finite number of observed tasks and (ii) error due to the finite number of examples observed per task. Closely following the arguments in , we apply a standard PAC-Bayes bound to each of these and combine the with a union bound, ing in the following Theorem. Theorem 1. Let P (θ) be an arbitrary prior distribution over θ that does not depend on the metatraining data. Then for any δ ∈, with probability at least 1 − δ, the following inequality holds uniformly for all choices of θ µ and θ σ, where n is the number of meta-training tasks and K is the number of per-task validation datapoints. We defer the proof to the Appendix A.4. The key difference from the in is that we leverage the fact that the task training data is split into training and validation. 
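The KL(q(θ; τ) ∥ P(θ)) term appearing in this bound is the same weight-level penalty used in the training objective of Section 4.2. As a minimal sketch (illustrative PyTorch; names such as GaussianMetaWeights and adapt_and_eval are ours, not the authors' code), the regularized outer objective can be computed as:

```python
import torch
import torch.nn as nn

class GaussianMetaWeights(nn.Module):
    """Meta-parameters theta modelled as a diagonal Gaussian N(theta_mu, theta_sigma)."""
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(dim))
        self.log_sigma = nn.Parameter(torch.full((dim,), -3.0))

    def sample(self):
        return self.mu + torch.randn_like(self.mu) * self.log_sigma.exp()

    def kl_to_standard_normal(self):
        var = (2 * self.log_sigma).exp()
        return 0.5 * (self.mu.pow(2) + var - 2 * self.log_sigma - 1.0).sum()

def mr_weights_objective(theta_q, task_batch, adapt_and_eval, beta):
    """One outer step of meta-regularization on the weights.

    adapt_and_eval(theta, task) is assumed to run task adaptation on the task
    training set (adapting the *unregularized* parameters theta-tilde) and
    return the post-adaptation loss on the task validation set.
    """
    theta = theta_q.sample()                       # reparameterized draw of theta
    meta_loss = torch.stack([adapt_and_eval(theta, t) for t in task_batch]).mean()
    return meta_loss + beta * theta_q.kl_to_standard_normal()
```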
In practice, we set P (θ) = r(θ) = N (θ; 0, I). If we can achieve a low value for the bound, then with high probability, our test error will also be low. As shown in the Appendix A.4, by a first order Taylor expansion of the the second term of the RHS in Eq. and setting the coefficient of the KL, we recover the MR-MAML(W) objective (Eq.). β tradesoff between the tightness of the generalization bound and the probability that it holds true. The of this bound suggests that the proposed meta-regularization on weights does indeed improve generalization on the meta-test set. 1 In practice, L(x *, y *, φi) is MSE on a bounded target space or classification accuracy. We optimize the negative log-likelihood as a bound on the 0-1 loss. Previous works have developed approaches for mitigating various forms of overfitting in metalearning. These approaches aim to improve generalization in several ways: by reducing the number of parameters that are adapted in MAML , by compressing the task embedding , through data augmentation from a GAN , by using an auxiliary objective on task gradients , and via an entropy regularization objective . These methods all focus on the setting with mutually-exclusive task distributions. We instead recognize and formalize the memorization problem, a particular form of overfitting that manifests itself with non-mutually-exclusive tasks, and offer a general and principled solution. Unlike prior methods, our approach is applicable to both contextual and gradientbased meta-learning methods. We additionally validate that prior regularization approaches, namely TAML , are not effective for addressing this problem setting. Our derivation uses a Bayesian interpretation of meta-learning (; ; ; ; ; ;). Some Bayesian meta-learning approaches place a distributional loss on the inferred task variables to constrain them to a prior distribution (b; ;), which amounts to an information bottleneck on the latent task variables.;; aim to produce simpler or more compressed task adaptation processes. Our approach does the opposite, penalizing information from the inputs and parameters, to encourage the task-specific variables to contain greater information driven by the per-task data. We use PAC-Bayes theory to study the generalization error of meta-learning and meta-regularization. extends the single task PAC-Bayes bound to the multitask setting, which quantifies the gap between empirical error on training tasks and the expected error on new tasks. More recent research shows that, with tightened generalization bounds as the training objective, the algorithms can reduce the test error for mutually-exclusive tasks . Our analysis is different from these prior works in that we only include preupdate meta parameters in the generalization bound rather than both pre-update and post-update parameters. In the derivation, we also explicitly consider the splitting of data into the task training set and task validation set, which is aligned with the practical setting. The memorization problem differs from overfitting in conventional supervised learning in several aspects. First, memorization occurs at the task level rather than datapoint level and the model memorizes functions rather than labels. In particular, within a training task, the model can generalize to new datapoints, but it fails to generalize to new tasks. Second, the source of information for achieving generalization is different. 
For meta-learning the information is from both the meta-training data and new task training data but in standard supervised setting the information is only from training data. Finally, the aim of regularization is different. In the conventional supervised setting, regularization methods such as weight decay , dropout , the information bottleneck , and Bayes-by-Backprop are used to balance the network complexity and the information in the data. The aim of meta-regularization is different. It governs the model complexity to avoid one complex model solving all tasks, while allowing the model's dependency on the task data to be complex. We further empirically validate this difference, finding that standard regularization techniques do not solve the memorization problem. In the experimental evaluation, we aim to answer the following questions: How prevalent is the memorization problem across different algorithms and domains? How does the memorization problem affect the performance of algorithms on non-mutually-exclusive task distributions? Is our meta-regularization approach effective for mitigating the problem and is it compatible with multiple types of meta-learning algorithms? Is the problem of memorization empirically distinct from that of the standard overfitting problem? To answer these questions, we propose several meta-learning problems involving non-mutuallyexclusive task distributions, including two problems that are adapted from prior benchmarks with mutually-exclusive task distributions. We consider model-agnostic meta-learning (MAML) and conditional neural processes (CNP) as representative meta-learning algorithms. We study both variants of our method in combination with MAML and CNP. When comparing with meta-learning algorithms with and without meta-regularization, we use the same neural network architecture, while other hyperparameters are tuned via cross-validation per-problem. First, we consider a toy sinusoid regression problem that is non-mutually-exclusive. The data for each task is created in the following way: the amplitude A of the sinusoid is uniformly sampled from a set of 20 equally-spaced points {0.1, 0.3, · · ·, 4}; u is sampled uniformly from [−5, 5] and y is sampled from N (A sin(u), 0.1 2 ). We provide both u and the amplitude A (as a one-hot vector) as input, i.e. x = (u, A). At the test time, we expand the range of the tasks by randomly sampling the data-generating amplitude A uniformly from [0.1, 4] and use a random one-hot vector for the input to the network. The meta-training tasks are a proper subset of the meta-test tasks. Without the additional amplitude input, both MAML and CNP can easily solve the task and generalize to the meta-test tasks. However, once we add the additional amplitude input which indicates the task identity, we find that both MAML and CNP converge to the complete memorization solution and fail to generalize well to test data (Table 1 and Appendix Figures 7 and 8). Both meta-regularized MAML and CNP (MR-MAML) and (MR-CNP) instead converge to a solution that adapts to the data, and as a , greatly outperform the unregularized methods. Table 1: Test MSE for the non-mutually-exclusive sinusoid regression problem. We compare MAML and CNP against meta-regularized MAML (MR-MAML) and meta-regularized CNP (MR-CNP) where regularization is either on the activations (A) or the weights (W). We report the mean over 5 trials and the standard deviation in parentheses. 
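For concreteness, the data-generating process just described can be sketched as follows (NumPy; function and constant names are ours, and the noise model follows the description above):

```python
import numpy as np

AMPLITUDE_GRID = np.linspace(0.1, 4.0, 20)   # 20 equally-spaced amplitudes

def sample_sinusoid_task(n_points=10, meta_train=True, noise_std=0.1, rng=None):
    """Non-mutually-exclusive sinusoid task: the input is (u, one-hot amplitude id)."""
    rng = rng if rng is not None else np.random.default_rng()
    if meta_train:
        amp_id = rng.integers(len(AMPLITUDE_GRID))
        amplitude = AMPLITUDE_GRID[amp_id]
        one_hot = np.eye(len(AMPLITUDE_GRID))[amp_id]
    else:
        # Test tasks: amplitude drawn from the continuous range and a *random*
        # one-hot code, so the code no longer identifies the task.
        amplitude = rng.uniform(0.1, 4.0)
        one_hot = np.eye(len(AMPLITUDE_GRID))[rng.integers(len(AMPLITUDE_GRID))]
    u = rng.uniform(-5.0, 5.0, size=(n_points, 1))
    y = amplitude * np.sin(u) + noise_std * rng.normal(size=(n_points, 1))
    x = np.concatenate([u, np.tile(one_hot, (n_points, 1))], axis=1)
    return x, y
```

At meta-training time the one-hot code identifies the task, which is what makes memorization possible; at meta-test time it is uninformative, so only a model that uses the task training data can recover the amplitude.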
To illustrate the memorization problem on a more realistic task, we create a multi-task regression dataset based on the Pascal 3D data (See Appendix A.5.1 for a complete description). We randomly select 50 objects for meta-training and the other 15 objects for meta-testing. For each object, we use MuJoCo to render images with random orientations of the instance on a table, visualized in Figure 1. For the meta-learning algorithm, the observation (x) is the 128 × 128 gray-scale image and the label (y) is the orientation relative to a fixed canonical pose. Because the number of objects in the meta-training dataset is small, it is straightforward for a single network to memorize the canonical pose for each training object and to infer the orientation from the input image, thus achieving a low meta-training error without using D. However, this solution performs poorly at the test time because it has not seen the novel objects and their canonical poses. Optimization modes and hyperparameter sensitivity. We choose the learning rate from {0.0001, 0.0005, 0.001} for each method, β from {10 −6, 10 −5, · · ·, 1} for meta-regularization and report the with the best hyperparameters (as measured on the meta-validation set) for each method. In this domain, we find that the convergence point of the meta-learning algorithm is determined by both the optimization landscape of the objective and the training dynamics, which vary due to stochastic gradients and the random initialization. In particular, we observe that there are two modes of the objective, one that corresponds to complete memorization and one that corresponds to successful adaptation to the task data. As illustrated in the Appendix, we find that models that converge to a memorization solution have lower training error than solutions which use the task training data, indicating a clear need for meta-regularization. When the meta-regularization is on the activations, the solution that the algorithms converge to depends on the learning rate, while MR on the weights consistently converges to the adaptation solution (See Appendix Figure 9 for the sensitivity analysis). This suggests that MR on the activations is not always successful at preventing memorization. Our hypothesis is that there exists a solution in which the bottlenecked activations encode only the prediction y *, and discard other information. Such a solution can achieve both low training MSE and low regularization loss without using task training data, particularly if the predicted label contains a small number of bits (i.e., because the activations will have low information complexity). However, note that this solution does not achieve low regularization error when applying MR to the weights because the function needed to produce the predicted label does not have low information complexity. As a , meta-regularization on the weights does not suffer from this pathology and is robust to different learning rates. Therefore, we will use regularization on weights as the proposed methodology in the following experiments and algorithms in Appendix A.1. Quantitative . We compare MAML and CNP with their meta-regularized versions (Table 2). We additionally include fine-tuning as baseline, which trains a single network on all the instances jointly, and then fine-tunes on the task training data. Meta-learning with meta-regularization (on weights) outperforms all competing methods by a large margin. We show test error as a function of the meta-regularization coefficient β in Appendix Figure 2. 
The curve reflects the trade-off when changing the amount of information contained in the weights. This indicates that β gives a knob that allows us to tune the degree to which the model uses the data to adapt versus relying on the prior. We observe β provides us a knob with which we can control the degree to which the algorithm adapts versus memorizes. When β is small, we observe memorization, leading to large test error; when β is too large, the network does not store enough information in the weights to perform the task. Crucially, in the middle of these two extremes, meta-regularization is effective in inducing adaptation, leading to good generalization. The plot shows the mean and standard deviation across 5 meta-training runs. Comparison to standard regularization. We compare our meta-regularization with standard regularization techniques, weight decay and Bayes-by-Backprop , in Table 3. We observe that simply applying standard regularization to all the weights, as in conventional supervised learning, does not solve the memorization problem, which validates that the memorization problem differs from the standard overfitting problem. Next, we study memorization in the few-shot classification problem by adapting the few-shot Omniglot and MiniImagenet bench-marks to the non-mutually-exclusive setting. In the non-mutually-exclusive N-way K-shot classification problem, each class is (randomly) assigned a fixed classification label from 1 to N. For each task, we randomly select a corresponding class for each classification label and K task training data points and K task test data points from that class 2. This ensures that each class takes only one classification label across tasks and different tasks are non-mutually-exclusive (See Appendix A.5.2 for details). We evaluate MAML, TAML , MR-MAML (ours), fine-tuning, and a nearest neighbor baseline on non-mutually-exclusive classification tasks (Table 4). We find that MR-MAML significantly outperforms previous methods on all of these tasks. To better understand the problem, for the MAML variants, we calculate the pre-update accuracy (before adaptation on the task training data) on the meta-training data in Appendix Table 5. The high pre-update meta-training accuracy and low meta-test accuracy are evidence of the memorization problem for MAML and TAML, indicating that it is learning a model that ignores the task data. In contrast, MR-MAML successfully controls the pre-update accuracy to be near chance and encourages the learner to use the task training data to achieve low meta-training error, ing in good performance at meta-test time. Finally, we verify that meta-regularization does not degrade performance on the standard mutuallyexclusive task. We evaluate performance as a function of regularization strength on the standard 20-way 1-shot Omniglot task (Appendix Figure 10), and we find that small values of β lead to slight improvements over MAML. This indicates that meta-regularization substantially improves performance in the non-mutually-exclusive setting without degrading performance in other settings. Table 4: Meta-test accuracy on non-mutually-exclusive (NME) classification. The fine-tuning and nearestneighbor baseline for MiniImagenet are from . Meta-learning has achieved remarkable success in few-shot learning problems. However, we identify a pitfall of current algorithms: the need to create task distributions that are mutually exclusive. This requirement restricts the domains that meta-learning can be applied to. 
We formalize the failure mode, i.e. the memorization problem, that results from training on non-mutually-exclusive tasks and distinguish it as a function-level overfitting problem, as compared to the standard label-level overfitting in supervised learning. We illustrate the memorization problem with different meta-learning algorithms on a number of domains. To address the problem, we propose an algorithm-agnostic meta-regularization (MR) approach that leverages an information-theoretic perspective of the problem. The key idea is that by placing a soft restriction on the information flow from meta-parameters in the prediction of test set labels, we can encourage the meta-learner to use task training data during meta-training. We achieve this by controlling the complexity of the model prior to task adaptation. The memorization issue is quite broad and is likely to occur in a wide range of real-world applications, for example, personalized speech recognition systems, learning robots that can adapt to different environments, and learning goal-conditioned manipulation skills using trial-and-error data. Further, this challenge may also be prevalent in other conditional prediction problems beyond meta-learning, an interesting direction for future study. By both recognizing the challenge of memorization and developing a general and lightweight approach for solving it, we believe that this work represents an important step towards making meta-learning algorithms applicable to and effective on any problem domain. We present the detailed algorithm for meta-regularization on weights with conditional neural processes (CNP) in Algorithm 1 and with model-agnostic meta-learning (MAML) in Algorithm 2. For CNP, we add the regularization on the weights θ of the encoder and leave the other weights θ̃ unrestricted. For MAML, we similarly regularize the weights θ from the input to an intermediate hidden layer and leave the weights θ̃ used for adaptation unregularized. In this way, we restrict the complexity of the pre-adaptation model, not the post-adaptation model.
Algorithm 1: Meta-Regularized CNP. input: Task distribution p(T); encoder weights distribution q(θ; τ) = N(θ; τ) with Gaussian parameters τ = (θ_µ, θ_σ); prior distribution r(θ) and Lagrangian multiplier β; θ̃ that parameterizes the feature extractor h_θ̃(·) and decoder T_θ̃(·); stepsize α. output: Network parameters τ, θ̃. Initialize τ, θ̃ randomly; while not converged do: sample a mini-batch of {T_i} from p(T); sample θ ∼ q(θ; τ) with reparameterization; [remaining steps omitted in this listing].
Algorithm 2: Meta-Regularized MAML. input: Task distribution p(T); weights distribution q(θ; τ) = N(θ; τ) with Gaussian parameters τ = (θ_µ, θ_σ); prior distribution r(θ) and Lagrangian multiplier β; stepsizes α, α′. output: Network parameters τ, θ̃. Initialize τ, θ̃ randomly; while not converged do: sample a mini-batch of {T_i} from p(T); sample θ ∼ q(θ; τ) with reparameterization; compute task-specific parameters φ_i = θ̃ + α ∇_θ̃ log q(y_i | z_i, θ̃); update θ̃ (and τ) by gradient ascent on the mini-batch objective [remaining steps omitted in this listing].
Algorithm 3: Meta-Regularized Methods in Meta-testing. input: Meta-testing task T with training data D = (x, y) and testing input x*, optimized parameters τ, θ̃; the task training data is encoded (e.g., via h_θ̃(z_k, y_k) for MR-CNP) and used for adaptation before predicting ŷ* [remaining steps omitted in this listing].
We show that I(x*; ŷ* | D, z*, θ) ≤ I(ŷ*; D | z*, θ). By Figure 4, we have that I(ŷ*; x* | θ, D, z*) = 0. By the chain rule of mutual information, the result follows. A.3 META REGULARIZATION ON WEIGHTS Similar to prior work, we use ξ to denote the unknown parameters of the true data generating distribution. This defines a joint distribution over ξ, the meta-training data, and θ. The meta-training loss in Eq.
1 is an upper bound for the cross entropy H p,q (y Here the only negative term is the I(y * 1:N, D 1:N ; θ|x * 1:N, ξ), which quantifies the information that the meta-parameters contain about the meta-training data beyond what can be inferred from the data generating parameters (i.e., memorization). Without proper regularization, the cross entropy loss can be minimized by maximizing this term. We can control its value by upper bounding it where the second equality follows because θ and ξ are conditionally independent given M. This gives the regularization in Section 4.2. First, we prove a more general and then specialize it. The goal of the meta-learner is to extract information about the meta-training tasks and the test task training data to serve as a prior for test examples from the novel task. This information will be in terms of a distribution Q over possible models. When learning a new task, the meta-learner uses the training task data D and a model parameterized by θ (sampled from Q(θ)) and outputs a distribution q(φ|D, θ) over models. Our goal is to learn Q such that it performs well on novel tasks. To formalize this, define where L(φ(x *), y * ) is a bounded loss in. Then, we would like to minimize the error on novel tasks Because we only have a finite training set, computing er(Q) is intractable, but we can form an empirical estimate: where for exposition we assume K = |D * i | is the same for all i. We would like to relate er(Q) and n due to the learning algorithm. There are two sources of generalization error: (i) error due to the finite number of observed tasks and (ii) error due to the finite number of examples observed per task. Closely following the arguments in , we apply a standard PAC-Bayes bound to each of these and combine the with a union bound. Theorem. Let Q(θ) be a distribution over parameters θ and let P (θ) be a prior distribution. Then for any δ ∈, with probability at least 1 − δ, the following inequality holds uniformly for all distributions Q, Proof. To start, we state a classical PAC-Bayes bound and use it to derive generalization bounds on task and datapoint level generalization, respectively. Theorem 2. Let X be a sample space (i.e. a space of possible datapoints). Let P (X) be a distribution over X (i.e. a data distribution). Let Θ be a hypothesis space. Given a "loss function" l(θ, X): Θ × X → and a collection of M i.i.d. random variables sampled from P (X), X 1,..., X M, let π be a prior distribution over hypotheses in Θ that does not depend on the samples but may depend on the data distribution P (X). Then, for any δ ∈, the following bound holds uniformly for all posterior distributions ρ over Θ Meta-level generalization First, we bound the task-level generalization, that is we relate er(Q) to, then Theorem 1 says that for any δ 0 ∼ where P is a prior over θ. Within task generalization Next, we relate er(Q,) via the PAC-Bayes bound. For a fixed task i, task training data D i, a prior π(φ|T i) that only depends on the training data, and any δ i ∈, we have that Now, we choose π(φ|T i) to be P (θ)q(φ|θ, D i)dθ and restrict ρ(φ) to be of the form Q(θ)q(φ|θ, D i)dθ for any Q. While, π and ρ may be complicated distributions (especially, if they are defined implicitly), we know that with this choice of π and ρ, D KL (ρ||π) ≤ D KL (Q||P) , hence, we have Overall bound on meta-learner generalization Combining Eq. 
and using the union bound, we have Choosing δ 0 = δ K+1 and δ i = Kδ n(K+1), then we have: Because n is generally large, by Taylor expansion of the complexity term we have Re-defining the coefficient of KL term as β and omitting the constant and higher order term, we recover the meta-regularization bound in Eq. when Q(θ) = N (θ; θ µ, θ σ). A.5.1 POSE PREDICTION We create a multi-task regression dataset based on the Pascal 3D data . The dataset consists of 10 classes of 3D object such as "aeroplane", "sofa", "TV monitor", etc. Each class has multiple different objects and there are 65 objects in total. We randomly select 50 objects for meta-training and the other 15 objects for meta-testing. For each object, we use MuJoCo to render 100 images with random orientations of the instance on a table, visualized in Figure 1. For the meta-learning algorithm, the observation (x) is the 128 × 128 gray-scale image and the label (y) is the orientation re-scaled to be within. For each task, we randomly sample meta batch-size of 10 tasks per iteration. For MR-CNP, we use a convolutional encoder with a fully connected bottom layer to map the input image to a 20-dimensional latent representation z and z * for task training input x and test input x * respectively. The (z, y) are concatenated and mapped by the feature extractor and aggregator which are fully connected networks to the 200 dimensional task summary statistics φ. The decoder is a fully connected network that maps (φ, z *) to the predictionŷ *. For MR-MAML, we use a convolutional encoder to map the input image to a 14 × 14 dimensional latent representation z and z *. The pairs (z, y) are used in the task adaptation step to get a task specific parameter φ via gradient descent. Then z * is mapped to the predictionŷ * with a convolutional predictor parameterized by φ. The network is trained using 5 gradient steps with learning rate 0.01 in the inner loop for adaptation and evaluated using 20 gradient steps at the test-time. The Omniglot dataset consists of 20 instances of 1623 characters from 50 different alphabets. We randomly choose 1200 characters for meta-training and use the remaining for testing. The metatraining characters are partitioned into 60 disjoint sets for 20-way classification. The MiniImagenet dataset contains 100 classes of images including 64 training classes, 12 validation classes, and 24 test classes. We randomly partition the 64 meta-training classes into 13 disjoint sets for 5-way classification with one label having one less class of images than the others. For MR-MAML we use a convolutional encoder similar to the pose prediction problem. The dimension of z and z * is 14 × 14 for Omniglot and 20 × 20 for MiniImagenet. We use a convolutional decoder for both datasets. Following , we use a meta batch-size of 16 for 20-way Omniglot classification and meta batch-size of 4 for 5-way MiniImagenet classification. The metalearning rate is chosen from {0.001, 0.005} and the β for meta-regularized methods are chosen from {10 −7, 10 −6, . . ., 10 −3}. The optimal hyperparameters are chosen for each method separately via cross-validation. We show a standard few-shot classification setup in meta-learning to illustrate a mutually-exclusive task distribution and a graphical model for the regularization on the activations. As shown in Figures 5, 7 and 8, when meta-learning algorithms converge to the memorization solution, the test tasks must be similar to the train tasks in order to achieve low test error. 
For CNP, although the task training set contains sufficient information to infer the correct amplitude, this information is ignored and the regression curve at test-time is determined by the one-hot vector. As a , CNP can only generalize to points from the curves it has seen in the training (Figure 7 first row). On the other hand, MAML does use the task training data (Figure 5, 8 and Table 1), however, its performance is much worse than in the mutually-exclusive task. MR-MAML and MR-CNP avoid converging to a memorization solution and achieve excellent test performance on sinusoid task. For each trial, we calculate the mean MSE over 100 randomly generated meta-testing tasks. We report the mean and standard deviation over 5 random trials. and W ∈ R 21×100. For both CNP and MAML, the meta-regularization restricts the part of weights that is connected to A close to 0. Therefore it avoids storing the amplitude information in weights and forces the amplitude to be inferred from the task training data D, hence preventing the memorization problem. Published as a conference paper at ICLR 2020 In Table 5, we report the pre-update accuracy for the non-mutually-exclusive classification experiment in Section 6.3. The pre-update accuracy is obtained by the initial parameters θ rather than the task adapted parameters φ. At the meta-training time, for both MAML and MR-MAML the post-update accuracy obtained by using φ gets close to 1. High pre-update accuracy reflects the memorization problem. For example, in 20-way 1-shot Omniglot example, the pre-update accuracy for MAML is 99.2% at the training time, which means only 0.8% improvement in accuracy is due to adaptation, so the task training data is ignored to a large extent. The pre-update training accuracy for MR-MAML is 5%, which means 95% improvement in accuracy during training is due to the adaptation. This explains why in Table 4, the test accuracy of MR-MAML is much higher than that of MAML at the test-time, since the task training data is used to achieve fast adaptation. Table 5: Meta-training pre-update accuracy on non-mutually-exclusive classification. MR-MAML controls the meta-training pre-update accuracy close to random guess and achieves low training error after adaptation. ization strength β on the mutually-exclusive 20-way 1-shot Omniglot problem. The plot shows the mean and standard deviation across 5 meta-training runs. When β is small, MR-MAML slightly outperforms MAML, indicating that meta-regularization does not degrade performance on mutually-exclusive tasks. The accuracy numbers are not directly comparable to previous work (e.g., ) because we do not use data augmentation. | We identify and formalize the memorization problem in meta-learning and solve this problem with novel meta-regularization method, which greatly expand the domain that meta-learning can be applicable to and effective on. | 1,229 | scitldr |
Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule. Both these approaches pose challenges. On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour. On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that cannot scale beyond few-shot task adaptation. In this work we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations. WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution. Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner. Warp-layers are meta-learned without backpropagating through the task training process, in a manner similar to methods that learn to directly produce updates. WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems. We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning. Learning to learn implies inferring a learning strategy from some set of past experiences via a meta-learner that a task-learner can leverage when learning a new task. One approach is to directly parameterise an update rule via the memory of a recurrent neural network. Such memory-based methods can, in principle, represent any learning rule by virtue of being universal function approximators. They can also scale to long learning processes by using truncated backpropagation through time, but they lack an inductive bias as to what constitutes a reasonable learning rule. This renders them hard to train and brittle to generalisation, as their parameter updates have no guarantees of convergence. An alternative family of approaches defines a gradient-based update rule and meta-learns a shared initialisation that facilitates task adaptation across a distribution of tasks. Such methods are imbued with a strong inductive bias, gradient descent, but restrict knowledge transfer to the initialisation. Recent work has shown that it is beneficial to more directly control gradient descent by meta-learning an approximation of a parameterised matrix that preconditions gradients during task training, similarly to second-order and Natural Gradient Descent methods. To meta-learn preconditioning, these methods backpropagate through the gradient descent process, limiting them to few-shot learning. In this paper, we propose a novel framework called Warped Gradient Descent (WarpGrad) that relies on the inductive bias of gradient-based meta-learners by defining an update rule that preconditions gradients, but that is meta-learned using insights from memory-based methods. Figure 1: Schematics of WarpGrad.
WarpGrad preconditioning is embedded in task-learners f by interleaving warp-layers (ω, ω′) between each task-learner's layers (h, h′). WarpGrad achieves preconditioning by modulating layer activations in the forward pass and gradients in the backward pass by backpropagating through warp-layers (Dω), which implicitly preconditions gradients by some matrix (P). Warp-parameters (φ) are meta-learned over the joint search space induced by task adaptation (E_θ[J(φ)]) to form a geometry that facilitates task learning. In particular, we leverage that gradient preconditioning is defined point-wise in parameter space and can be seen as a recurrent operator of order 1. We use this insight to define a trajectory-agnostic meta-objective over a joint parameter search space where knowledge transfer is encoded in gradient preconditioning. To achieve a scalable and flexible form of preconditioning, we take inspiration from works that embed preconditioning in task-learners, but we relax the assumption that task-learners are feed-forward and replace their linear projection with a generic neural network ω, referred to as a warp layer. By introducing non-linearity, preconditioning is rendered data-dependent. This allows WarpGrad to model preconditioning beyond the block-diagonal structure of prior works and enables it to meta-learn over arbitrary adaptation processes. We empirically validate WarpGrad and show it surpasses baseline gradient-based meta-learners on standard few-shot learning tasks (miniImageNet, tieredImageNet), while scaling beyond few-shot learning to standard supervised settings on the "multi"-shot Omniglot benchmark and a multi-shot version of tieredImageNet. We further find that WarpGrad outperforms competing methods in a reinforcement learning (RL) setting where previous gradient-based meta-learners fail (maze navigation with recurrent neural networks) and can be used to meta-learn an optimiser that prevents catastrophic forgetting in a continual learning setting. WarpGrad belongs to the family of optimisation-based meta-learners that parameterise an update rule θ ← U(θ; ξ) with some meta-parameters ξ. Specifically, gradient-based meta-learners define an update rule by relying on gradient descent, U(θ; ξ) := θ − α∇L(θ), for some objective L and learning rate α. A task τ is defined by a training set D^τ_train and a test set D^τ_test, which define learning objectives L_{D^τ}(θ) := E_{(x,y)∼D^τ}[ℓ(f(x, θ), y)] over the task-learner f for some loss ℓ. MAML meta-learns a shared initialisation θ_0 by backpropagating through K steps of gradient descent across a given task distribution p(τ),
min_{θ_0} E_{τ∼p(τ)} [ L_{D^τ_test}(θ^τ_K) ],  where θ^τ_k = θ^τ_{k−1} − α ∇L_{D^τ_train}(θ^τ_{k−1}) and θ^τ_0 = θ_0.  (1)
Figure 2: Gradient-based meta-learning. Colours denote different tasks (τ), dashed lines denote backpropagation through the adaptation process, and solid black lines denote optimiser parameter (φ) gradients w.r.t. one step of task parameter (θ) adaptation. Left: a meta-learned initialisation compresses trajectory information into a single initial point (θ_0). Middle: MAML-based optimisers interact with adaptation trajectories at every step and backpropagate through each interaction. Right: WarpGrad is trajectory agnostic. Task adaptation defines an empirical distribution p(τ, θ) over which WarpGrad learns a geometry for adaptation by optimising for steepest descent directions.
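To see why optimising Eq. 1 becomes costly as K grows, consider a toy sketch of a single MAML meta-gradient (PyTorch; the linear-regression task-learner and function names are ours, not the authors' code): every inner step must stay in the autograd graph, so memory and the backpropagation path scale with K.

```python
import torch

def maml_meta_loss(theta0, task_train, task_test, loss_fn, K=5, alpha=0.1):
    """Meta-loss of Eq. 1 for a single task: adapt for K steps, evaluate on test data.

    create_graph=True keeps all K inner steps in the autograd graph, so the
    meta-gradient w.r.t. theta0 is a K-step second-order backpropagation.
    """
    theta = theta0
    x_tr, y_tr = task_train
    for _ in range(K):
        inner_loss = loss_fn(theta, x_tr, y_tr)
        (grad,) = torch.autograd.grad(inner_loss, theta, create_graph=True)
        theta = theta - alpha * grad
    x_te, y_te = task_test
    return loss_fn(theta, x_te, y_te)

# Toy linear-regression task-learner: theta is a single weight vector.
def loss_fn(theta, x, y):
    return ((x @ theta - y) ** 2).mean()

theta0 = torch.randn(3, requires_grad=True)
x, y = torch.randn(8, 3), torch.randn(8)
meta_loss = maml_meta_loss(theta0, (x, y), (x, y), loss_fn)
meta_loss.backward()      # meta-gradient flows through all K inner updates
print(theta0.grad.shape)
```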
Meta-SGD (MSGD) learns a vector of learning rates, whereas Meta-Curvature (MC) defines a block-diagonal preconditioning matrix B, and T-Nets embed block-diagonal preconditioning in feed-forward learners via linear projections T. These methods optimise for meta-parameters ξ = {θ_0, φ} by backpropagating through the gradient descent process, as per Eq. 1. This limits them to few-shot learning as they become computationally expensive, susceptible to exploding/vanishing gradients, and susceptible to a credit assignment problem. Our goal is to develop a meta-learner that overcomes all three limitations. To do so, we depart from the paradigm of backpropagating to the initialisation and exploit the fact that learning to precondition gradients can be seen as a Markov Process of order 1, a special form of a recurrent update rule. To develop this notion, we first establish a general-purpose form of preconditioning (Section 2.2). Based on this, we obtain a canonical meta-objective from a geometrical point of view (Section 2.3), from which we derive a trajectory-agnostic meta-objective (Section 2.4). A preconditioned gradient descent rule, U(θ; φ) := θ − αP(θ; φ)∇L(θ), defines a geometry via P. To disentangle the expressive capacity of this geometry from the expressive capacity of the task-learner f, we take inspiration from T-Nets, which embed linear projections T in feed-forward layers, h = σ(T W x + b). This in itself is not sufficient to achieve disentanglement, since the parameterisation of T is directly linked to that of W, but it can be achieved under non-linear preconditioning. To this end, we relax the assumption that the task-learner is feed-forward and consider an arbitrary task-learner f. We insert warp-layers that are universal function approximators parameterised by neural networks into the task-learner without restricting their form or how they interact with f. In the simplest case, we interleave warp-layers between the layers of the task-learner, so that a forward pass alternates between task layers h and warp-layers ω, but other forms of interaction can be beneficial (see Appendix A for practical guidelines). Figure 3: Left: synthetic experiment illustrating how WarpGrad warps gradients (see Appendix D for full details). Each task f ∼ p(f) defines a distinct loss surface (W, bottom row). Gradient descent (black) on these surfaces struggles to find a minimum. WarpGrad meta-learns a warp ω to produce better update directions (magenta; Section 2.4). In doing so, WarpGrad learns a meta-geometry P where standard gradient descent is well behaved (top row). Right: gradient descent in P is equivalent to first-order Riemannian descent in W under a meta-learned Riemann metric (Section 2.3). Backpropagation automatically induces gradient preconditioning, as in T-Nets, but in our case via the Jacobians of the warp-layers, D_x ω and D_θ ω, the Jacobians with respect to inputs and parameters, respectively. In the special case where f is feed-forward and each ω a linear projection, we obtain an instance of WarpGrad that is akin to T-Nets, since preconditioning is given by D_x ω = T. Conversely, by making warp-layers non-linear, we can induce interdependence between warp-layers, allowing WarpGrad to model preconditioning beyond the block-diagonal structure imposed by prior works. Further, this enables a form of task-conditioning by making the Jacobians of warp-layers data dependent. As we have made no assumptions on the form of the task-learner or the warp-layers, WarpGrad methods can act on any neural network through any form of warping, including recurrence.
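As an illustrative sketch of the simplest interleaved design (PyTorch; class and method names are ours, and the architecture is a stand-in rather than the exact models used in the experiments), warp convolutions can be placed after each task convolution, with only task parameters exposed to the task optimiser:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpedConvNet(nn.Module):
    """Task conv blocks interleaved with (optionally non-linear) warp convolutions."""
    def __init__(self, in_ch=3, hidden=64, n_classes=5, n_blocks=4, nonlinear_warp=False):
        super().__init__()
        self.task_layers, self.warp_layers = nn.ModuleList(), nn.ModuleList()
        ch = in_ch
        for _ in range(n_blocks):
            self.task_layers.append(nn.Conv2d(ch, hidden, 3, padding=1))
            warp = nn.Conv2d(hidden, hidden, 3, padding=1)
            if nonlinear_warp:   # non-linear warps give data-dependent Jacobians
                warp = nn.Sequential(warp, nn.ReLU(),
                                     nn.Conv2d(hidden, hidden, 3, padding=1))
            self.warp_layers.append(warp)
            ch = hidden
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        for task, warp in zip(self.task_layers, self.warp_layers):
            x = F.max_pool2d(F.relu(warp(task(x))), 2)
        return self.head(x.mean(dim=(2, 3)))

    def task_parameters(self):   # adapted per task
        return list(self.task_layers.parameters()) + list(self.head.parameters())

    def warp_parameters(self):   # fixed during adaptation, meta-learned across tasks
        return list(self.warp_layers.parameters())

# Task adaptation: SGD on task parameters only; the warp-layers precondition the
# task gradients simply by sitting in the forward and backward pass.
model = WarpedConvNet()
opt = torch.optim.SGD(model.task_parameters(), lr=0.1)
x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 5, (4,))
loss = F.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

Setting nonlinear_warp=True makes the warp Jacobians, and hence the implied preconditioning, input dependent.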
We show that increasing the capacity of the meta-learner by defining warp-layers as Residual Networks improves performance on classification tasks (Section 4.1). We also introduce recurrent warp-layers for agents in a gradient-based meta-learner that is the first, to the best of our knowledge, to outperform memory-based meta-learners on a maze navigation task that requires memory (Section 4.3). Warp-layers imbue WarpGrad with three powerful properties. First, thanks to the preconditioned gradients, WarpGrad inherits gradient descent properties, importantly guarantees of convergence. Second, warp-layers form a distributed representation of preconditioning that disentangles the expressiveness of the geometry it encodes from the expressive capacity of the task-learner. Third, warp-layers are meta-learned across tasks and trajectories and can therefore capture properties of the task-distribution beyond local information. Figure 3 illustrates these properties in a synthetic scenario, where we construct a family of tasks f: R 2 → R (see Appendix D for details) and meta-learn across the task distribution. WarpGrad learns to produce warped loss surfaces (illustrated on two tasks τ and τ) that are smoother and more well-behaved than their respective native loss-surfaces. If the preconditioning matrix P is invertible, it defines a valid Riemann metric and therefore enjoys similar convergence guarantees to gradient descent. Thus, if warp-layers represent a valid (meta-learned) Riemann metric, WarpGrad is well-behaved. For T-Nets, it is sufficient to require T to be full rank, since T explicitly defines P as a block-diagonal matrix with block entries T T T. In contrast, non-linearity in warp-layers precludes such an explicit identification. Instead, we must consider the geometry that warp-layers represent. For this, we need a metric tensor, G, which is a positive-definite, smoothly varying matrix that measures curvature on a manifold W. The metric tensor defines the steepest direction of descent by −G −1 ∇ L , hence our goal is to establish that warp-layers approximate some G −1. Let Ω represent the effect of warp-layers by a reparameterisation h (i) (x; Ω(θ; φ) ∀x, i that maps from a space P onto the manifold W with γ = Ω(θ; φ). We induce a metric G on W by push-forward (Figure 2): where Provided Ω is not degenerate (G is non-singular), G −1 is positivedefinite, hence a valid Riemann metric. While this is the metric induced on W by warp-layers, it is not the metric used to precondition gradients since we take gradient steps in P which introduces an error term (Figure 2). We can bound the error by first-order Taylor series expansion to establish first-order equivalence between the WarpGrad update in P (Eq. 7) and the ideal update in W (Eq. 8), Consequently, gradient descent under warp-layers (in P-space) is first-order equivalent to warping the native loss surface under a metric G to facilitate task adaptation. Warp parameters φ control the geometry induced by warping, and therefore what task-learners converge to. By meta-learning φ we can accumulate information that is conducive to task adaptation but that may not be available during that process. This suggest that an ideal geometry (in W-space) should yield preconditioning that points in the direction of steepest descent, accounting for global information across tasks, In contrast to MAML-based approaches (Eq. 1), this objective avoids backpropagation through learning processes. 
Instead, it defines task learning abstractly by introducing a joint distribution over objectives and parameterisations, opening up for general-purpose meta-learning at scale. The canonical objective in Eq. 10 describes a meta-objective for learning a geometry on first principles that we can render into a trajectory-agnostic update rule for warp-layers. To do so, we define a by a task-learnerf that is embedded with a shared WarpGrad optimiser, a meta-training objective L τ meta, and a task adaptation objective L τ task. We use L τ task to adapt task parameters θ and L τ meta to adapt warp parameters φ. Note that we allow meta and task objectives to differ in arbitrary ways, but both are expectations over some data, as above. In the simplest case they differ in terms of validation versus training data, but they may differ in terms of learning paradigm as well, as we demonstrate in continual learning experiment (Section 4.3). To obtain our meta-objective, we recast the canonical objective (Eq. 10) in terms of θ using first-order equivalence of gradient steps (Eq. 9). Next, we factorise p(τ, θ) into p(θ | τ)p(τ). Since p(τ) is given, it remains to consider a sampling strategy for p(θ | τ). For meta-learning of warp-layers, we assume this distribution is given. We later show how to incorporate meta-learning of a prior p(θ 0 | τ). While any sampling strategy is valid, in this paper we exploit that task learning under stochastic gradient descent can be seen as sampling from an empirical prior p(θ | τ) ; in particular, each iterate θ K and sampling such chains defines an empirical distribution p(θ | τ) around some prior p(θ 0 | τ), which we will discuss in Section 2.5. The joint distribution p(τ, θ) defines a joint search space across tasks. Meta-learning therefore learns a geometry over this space with the steepest expected direction of descent. This direction is however not with respect to the objective that produced the gradient, L τ task, but with respect to L τ meta, Decoupling the task gradient operator ∇L τ task from the geometry learned by L τ meta lets us infuse global knowledge in warp-layers, a promising avenue for future research . As an example of this, in Section 4.3 we meta-learn an update-rule that mitigates catastrophic forgetting by defining L τ meta over current and previous tasks seen at any given point of adaptation. In contrast to other gradient-based meta-learners, the WarpGrad meta-objective is defined as a one-step recurrent operator that is meta-learned across a joint search space. This meta-objective is trajectory agnostic and thus compatible with arbitrary task learning processes. It does not suffer from vanishing/exploding gradients nor from the credit assignment problem, but it does rely on second-order gradients, a requirement we can relax by detaching task parameter gradients (∇L where sg is the stop-gradient operator. In contrast to the first-order approximation of MAML , which ignores the entire trajectory except for the final gradient, this approximation retains all gradient terms and only discards local second-order effects, which are typically dominated by first-order effect in long parameter trajectories . Empirically, we find that our approximation only incurs a minor loss of performance in an ablation study on Omniglot (Appendix F). Interestingly, this approximation is a form of multitask learning with respect to φ (; ;) that marginalises over task parameters θ τ. 
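A sketch of one meta-update under this approximation (PyTorch; run_model, task_loss and meta_loss are assumed callables, and the function is ours rather than the authors' code): the task gradient at a sampled parameterisation is detached, a virtual preconditioned step is taken, and the meta-loss at the new point is backpropagated into the warp parameters only.

```python
import torch

def warp_meta_step(warp_params, sampled_tasks, run_model, task_loss, meta_loss,
                   warp_opt, alpha=0.1):
    """One WarpGrad meta-update from a mini-batch of sampled task parameterisations.

    sampled_tasks: iterable of (theta, x_tr, y_tr, x_val, y_val), where theta is a
    list of task-parameter tensors drawn from p(tau, theta), e.g. an iterate of some
    task's adaptation trajectory. run_model(x, theta, warp_params) runs the warped
    task-learner.
    """
    warp_opt.zero_grad()
    total, n = 0.0, 0
    for theta, x_tr, y_tr, x_val, y_val in sampled_tasks:
        theta = [p.detach().requires_grad_(True) for p in theta]
        # Task gradient at the sampled point (already warped, since the warp-layers
        # sit in the forward/backward pass). Detaching it implements the sg(.)
        # approximation: no second-order terms flow into warp_params here.
        g = torch.autograd.grad(
            task_loss(run_model(x_tr, theta, warp_params), y_tr), theta)
        theta_next = [p - alpha * gi.detach() for p, gi in zip(theta, g)]
        # Meta-loss after one (virtual) step; its gradient w.r.t. warp_params asks
        # for steepest-descent behaviour at this point in the joint search space.
        total = total + meta_loss(run_model(x_val, theta_next, warp_params), y_val)
        n += 1
    (total / n).backward()
    warp_opt.step()
```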
WarpGrad is a method for learning warp layer parameters φ over a joint search space defined by p(τ, θ). Because WarpGrad takes this distribution as given, we can integrate WarpGrad with methods that define or learn some form of "prior" p(θ 0 | τ) over θ τ 0. For instance, (a) Multi-task solution: in online learning, we can alternate between updating a multi-task solution and tuning warp parameters. We use this approach in our Reinforcement Learning experiment (Section 4.3); (b) Meta-learned point-estimate: when task adaptation occurs in batch mode, we can meta-learn a shared initialisation θ 0. Our few-shot and supervised learning experiments take this approach (Section 4.1); (c) Metalearned prior: WarpGrad can be combined with Bayesian methods that define a full prior (; ; ;). We incorporate such methods by some objective C (potentially vacuous) over θ 0 that we optimise jointly with WarpGrad, where L can be substituted byL and λ ∈ [0, ∞) is a hyper-parameter. We train the WarpGrad optimiser via stochastic gradient descent. We solve Eq. 13 by alternating between sampling task parameters from p(τ, θ) given the current parameter values for φ and taking meta-gradient steps over these samples to update φ. As such, our method can also be seen as a generalised form of gradient descent in the form of Mirror Descent with a meta-learned dual space . The details of the sampling procedure may vary depending on the specifics of the tasks (static, sequential), the design of the task-learner (feed-forward, recurrent), and the learning objective (supervised, self-supervised, reinforcement learning). In Algorithm 1 we illustrate a simple online algorithm with constant memory and linear complexity in K, assuming the same holds for C. A drawback of this approach is that it is relatively data inefficient; in Appendix B we detail a more complex offline training algorithm that stores task parameters in a replay buffer for mini-batched training of φ. The gains of the offline variant can be dramatic: in our Omniglot experiment (Section 4.1), offline meta-training allows us to update warp parameters 2000 times with each meta-batch, improving final test accuracy from 76.3% to 84.3% (Appendix F). Learning to learn, or meta-learning, has previously been explored in a variety of settings. Early work focused on evolutionary approaches (; ;). introduced gradient descent methods to meta-learning, specifically for recurrent meta-learning algorithms, extended to RL by and. A similar approach was taken by and to meta-learn a parameterised update rule in the form of a Recurrent Neural Network (RNN). A related idea is to separate parameters into "slow" and "fast" weights, where the latter captures meta-information and the former encapsulates rapid adaptation (; ;). This can be implemented by embedding a neural network that dynamically adapt the parameters of a main architecture . WarpGrad can be seen as learning slow warp-parameters that precondition adaptation of fast task parameters. Recent meta-learning has focused almost exclusively on the special case of few-shot learning, where tasks are characterised by severe data scarcity. In this settings, tasks must be relatively similar, such that a single or handful of examples are sufficient to learn the task at hand (; ; ;). Several meta-learners have been proposed that directly predict the parameters of the task-learner (; ; ;). To scale, such methods typically pretrain a feature extractor and predict a small subset of the parameters. 
Closely related to our work are gradient-based few-shot learning methods that extend MAML by sharing some subset of parameters between task-learners that is fixed during task training but meta-learned across tasks, which may reduce overfitting or induce more robust convergence. Such sharing can also be used to model latent variables for concept or task inference, which implicitly induce gradient modulation. Our work is also related to gradient-based meta-learning of a shared initialisation that scales beyond few-shot learning.
[Algorithms 1 and 2 (online and offline WarpGrad meta-training) are listed here. Both take a task distribution p(τ) and hyper-parameters α, β, λ as input, initialise φ and p(θ_0 | τ), and then repeat: sample a mini-batch of tasks T from p(τ); for each τ ∈ T, run K^τ task adaptation steps; and update warp parameters, either online during adaptation or offline from a buffer B of sampled (τ, k) iterates.]
Meta-learned preconditioning is closely related to parallel work on second-order optimisation methods for high-dimensional non-convex loss surfaces. In this setting, second-order optimisers typically struggle to improve upon first-order baselines. As second-order curvature is typically intractable to compute, such methods resort to low-rank approximations and suffer from instability. In particular, Natural Gradient Descent is a method that uses the Fisher Information Matrix as the curvature metric. Several works have proposed methods for amortising the cost of estimating the metric. As noted in prior work, expressing preconditioning through interleaved projections can be seen as a form of Mirror Descent. WarpGrad offers a new perspective on gradient preconditioning by introducing a generic form of model-embedded preconditioning that exploits global information beyond the task at hand. (Table 1 excerpt: Leap 73.9 ± 2.2, 75.5 ± 2.6; Warp-Leap 80.4 ± 1.6, 83.6 ± 1.9.) We evaluate WarpGrad in a set of experiments designed to answer three questions: (a) do WarpGrad methods retain the inductive bias of MAML-based few-shot learners? (b) Can WarpGrad methods scale to problems beyond the reach of such methods? (c) Can WarpGrad generalise to complex meta-learning problems? For few-shot learning, we test whether WarpGrad retains the inductive bias of gradient-based meta-learners while avoiding backpropagation through the gradient descent process. To isolate the effect of the WarpGrad objective, we use linear warp-layers that we train using online meta-training (Algorithm 1) to make WarpGrad as close to T-Nets as possible. For a fair comparison, we also meta-learn the initialisation. All task-learners use a convolutional architecture that stacks 4 blocks made up of a 3 × 3 convolution, max-pooling, batch-norm, and ReLU activation. We define Warp-MAML by inserting warp-layers in the form of 3 × 3 convolutions after each block in the baseline task-learner. All baselines are tuned with identical and independent hyper-parameter searches (including filter sizes; full experimental settings in Appendix H), and we report the best results from our experiments or the literature. Warp-MAML outperforms all baselines (Table 1), improving 1- and 5-shot accuracy by 3.6 and 5.5 percentage points on miniImageNet and by 5.2 and 3.8 percentage points on tieredImageNet, which indicates that WarpGrad retains the inductive bias of MAML-based meta-learners. Next, we evaluate whether WarpGrad can scale beyond few-shot adaptation on similar supervised problems.
Figure 4: Left: Omniglot test accuracies on held-out tasks after meta-training on a varying number of tasks. Shading represents standard deviation across 10 independent runs. We compare Warp-Leap, Leap, and Reptile, multi-headed fine-tuning, as well as SGD and KFAC, which used random initialisation but with 10x larger batch size and learning rate. Right: on an RL maze navigation task, mean cumulative return is shown. Shading represents inter-quartile ranges across 10 independent runs. † Simple modulation and ‡ retroactive modulation are used.
We propose a new protocol for tieredImageNet that increases the number of adaptation steps to 640 and uses 6 convolutional blocks in task-learners, which are otherwise defined as above. Since MAML-based approaches cannot backpropagate through 640 adaptation steps for models of this size, we evaluate WarpGrad against two gradient-based meta-learners that meta-learn an initialisation without such backpropagation, Reptile and Leap, and we define a Warp-Leap meta-learner by using Leap to meta-learn the initialisation jointly with the WarpGrad warp parameters. Leap is an attractive complement as it minimises the expected gradient descent trajectory length across tasks. Under WarpGrad, this becomes a joint search for a geometry in which task adaptation defines geodesics (shortest paths; see Appendix C for details). While Reptile outperforms Leap by 2.6 percentage points on this benchmark, Warp-Leap surpasses both, with a margin of 3.88 percentage points over Reptile (Table 1). We further evaluate Warp-Leap on the multi-shot Omniglot protocol proposed in prior work, where each of the 50 alphabets is a 20-way classification task. Task adaptation involves 100 gradient steps on random samples that are preprocessed by random affine transformations. We report results for Warp-Leap under offline meta-training (Algorithm 2), which updates warp parameters 2000 times per meta step (see Appendix E for experimental details). Warp-Leap enjoys similar performance on this task as well, improving over Leap and Reptile by 8.1 and 12.8 points respectively (Table 1). We also perform an extensive ablation study varying the number of tasks in the meta-training set. Except for the case of a single task, Warp-Leap substantially outperforms all baselines (Figure 4), achieving a higher rate of convergence and reducing the final test error from ~30% to ~15%. Non-linear warps, which go beyond block-diagonal preconditioning, reach ~11% test error (refer to Appendix F and Table 2 for the full results). Finally, we find that WarpGrad methods behave distinctly differently from Natural Gradient Descent methods in an ablation study (Appendix G). WarpGrad reduces final test error from ~42% to ~19%, controlling for initialisation, while its preconditioning matrices differ from what the literature suggests. (c.1) Reinforcement Learning. To illustrate how WarpGrad may be used both with recurrent neural networks and in meta-reinforcement learning, we evaluate it in a maze navigation task proposed in prior work. The environment is a fixed maze and a task is defined by randomly choosing a goal location. The agent's objective is to find the location as many times as possible, being teleported to a random location each time it finds it. We use advantage actor-critic with a basic recurrent neural network as the task-learner, and we design a Warp-RNN as a HyperNetwork that uses an LSTM that is fixed during task training. This LSTM modulates the weights of the task-learning RNN (defined in Appendix I), which in turn is trained on mini-batches of 30 episodes for 200 000 steps.
We accumulate the gradient of fixed warp-parameters continually (Algorithm 3, Appendix B) at each task parameter update. Warp parameters are updated on every 30th step on task parameters (we control for meta-LSTM capacity in Appendix I). We compare against Learning to Reinforcement Learn and Hebbian meta-learning; see Appendix I for details. Notably, linear warps (T-Nets) do worse than the baseline RNN on this task, while the Warp-RNN converges to a mean cumulative reward of ∼160 in 60 000 episodes, compared to baselines that reach at most a mean cumulative reward of ∼125 after 100 000 episodes (Figure 4), reaching ∼150 after 200 000 episodes (Appendix I). (c.2) Continual Learning We test if WarpGrad can prevent catastrophic forgetting in a continual learning scenario. To this end, we design a continual learning version of the standard sine regression meta-learning experiment by splitting the input interval [−5, 5] ⊂ R into 5 consecutive sub-tasks (an alternative protocol was recently proposed independently). Each sub-task is a regression problem with the target being a mixture of two random sine waves; for each task, we train a 4-layer feed-forward task-learner with interleaved warp-layers incrementally on one sub-task at a time (see Appendix J for details). To isolate the behaviour of WarpGrad parameters, we use a fixed random initialisation for each task sequence. Warp parameters are meta-learned to prevent catastrophic forgetting by defining L τ meta to be the average task loss over current and previous sub-tasks, for each sub-task in a task sequence. This forces warp-parameters to disentangle the adaptation process of current and previous sub-tasks. Evaluating our WarpGrad optimiser on 100 random tasks, we find that it learns new sub-tasks well, with mean losses of the order of 10^−3. When switching tasks, performance immediately deteriorates to the order of 10^−2, where it remains for the remainder of training (Figure 5). (Figure 5: continual learning regression experiment; average log-loss over 100 randomly sampled tasks; each task contains 5 sub-tasks learned (a) sequentially as seen during meta-training or (b) in random order [sub-task 1, 3, 4, 2, 0]; we train on each sub-task for 20 steps, for a total of K = 100 task adaptation steps.) These results indicate that WarpGrad can be an effective mechanism against catastrophic forgetting, a promising avenue for further research. For detailed results, see Appendix J. We propose WarpGrad, a novel meta-learner that combines the expressive capacity and flexibility of memory-based meta-learners with the inductive bias of gradient-based meta-learners. WarpGrad meta-learns to precondition gradients during task adaptation without backpropagating through the adaptation process, and we find empirically that it retains the inductive bias of MAML-based few-shot learners while being able to scale to complex problems and architectures. Further, by expressing preconditioning through warp-layers that are universal function approximators, WarpGrad is able to express geometries beyond the block-diagonal structure of prior works. WarpGrad provides a principled framework for general-purpose meta-learning that integrates learning paradigms, such as continual learning, an exciting avenue for future research. We introduce novel means for preconditioning, for instance with residual and recurrent warp-layers.
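As a small illustration of the residual warp-layers mentioned above, the block below is a hypothetical PyTorch sketch; the two-convolution body with batch normalisation and the identity skip connection are our assumptions, chosen so that the warp stays close to the identity at initialisation.

import torch.nn as nn

class ResidualWarp(nn.Module):
    """Non-linear warp-layer: two convolutions with a residual (skip) connection.
    Its parameters belong to the warp (meta-learned) parameter group."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        # The skip connection keeps the warp near the identity early in meta-training,
        # so inserting it does not destroy the task-learner's representation.
        return x + self.body(x)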
Understanding how WarpGrad manifolds relate to second-order optimisation methods will further our understanding of gradient-based meta-learning and aid us in designing warp-layers with stronger inductive bias. In their current form, WarpGrad methods share some of the limitations of many popular metalearning approaches. While WarpGrad is designed to avoid backpropagating through the task training process, as in Warp-Leap, the WarpGrad objective samples from parameter trajectories and has therefore linear computational complexity in the number of adaptation steps, currently an unresolved limitation of gradient-based meta-learning. Our offline algorithm (Algorithm 2) hints at exciting possibilities for overcoming this limitation. WarpGrad is a model-embedded meta-learned optimiser that allows for a number of implementation strategies. Indeed, there is a number of ways warp-layers can be embedded in an architecture of choice. To embed warp-layers given a task-learner architecture, we may either insert new warp-layers in the given architecture or designate some layers as warp-layers and some as task layers. We found that WarpGrad can both be used in a high-capacity mode, where task-learners are relatively weak to avoid overfitting, as well as in a low-capacity mode where task-learners are powerful and warp-layers are relatively weak. The best approach depends on the problem at hand. We highlight three approaches to designing WarpGrad optimisers, starting from a given architecture: (a) Model partitioning. Given a desired architecture, designate some operations as task-adaptable and the rest as warp-layers. Task layers do not have to interleave exactly with warp-layers as gradient warping arises both through the forward pass and through backpropagation. This was how we approached the tieredImageNet and miniImageNet experiments. (b) Model augmentation. Given a model, designate all layers as task-adaptable and interleave warplayers. Warp-layers can be relatively weak as backpropagation through non-linear activations ensures expressive gradient warping. This was our approach to the Omniglot experiment; our main architecture interleaves linear warp-layers in a standard architecture. (c) Information compression. Given a model, designate all layers as warp and interleave weak task layers. In this scenario, task-learners are prone to overfitting. Pushing capacity into the warp allows it to encode general information the task-learner can draw on during task adaptation. This approach is similar to approaches in transfer and meta-learning that restrict the number of free parameters during task training (; ; Figure 6 illustrates this process. In this section, we provide the variants of WarpGrad training algorithms used in this paper. Algorithm 1 describes a simple online algorithm, which accumulates meta-gradients online during task adaptation. This algorithm has constant memory and scales linearly in the length of task trajectories. In Algorithm 2, we describe an offline meta-training algorithm. This algorithm is similar to Algorithm 1 in many respects, but differs in that we do not compute meta-gradients online during task adaptation. Instead, we accumulate them into a replay buffer of sampled task parameterisations. This buffer is a Monte-Carlo sample of the expectation in the meta objective (Eq. 13) that can be thought of as a dataset in its own right. Hence, we can apply standard mini-batching with respect to the buffer and perform mini-batch gradient descent on warp parameters. 
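The following is a much-simplified, self-contained sketch of this offline procedure on a toy regression problem. It assumes a one-layer task-learner with a linear warp stacked on top; the dimensions, learning rates, and the single-step look-ahead used as the meta-objective are our own choices for illustration and are not the paper's implementation.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy instantiation: task-learner = one linear map W (adapted per task),
# warp = a linear layer stacked on top (meta-learned, frozen during adaptation).
warp = nn.Linear(10, 1)
warp_opt = torch.optim.Adam(warp.parameters(), lr=1e-3)
alpha = 0.1                                              # task learning rate
W0 = torch.randn(10, 10) * 0.1                           # shared task initialisation

def sample_task():
    w_true = torch.randn(10)                             # each task is a random regression
    xs = torch.randn(64, 10)
    return xs, xs @ w_true

def task_loss(W, xs, ys):
    return ((warp(xs @ W.t()).squeeze(-1) - ys) ** 2).mean()

# Phase 1: adapt task parameters with the warp frozen, storing snapshots of the
# trajectory into a replay buffer of task parameterisations.
buffer = []
for _ in range(20):                                      # tasks
    xs, ys = sample_task()
    W = W0.clone().requires_grad_()
    for _ in range(50):                                  # adaptation steps
        buffer.append((W.detach().clone(), xs, ys))
        g, = torch.autograd.grad(task_loss(W, xs, ys), W)
        W = (W - alpha * g).detach().requires_grad_()    # plain SGD, graph discarded

# Phase 2: mini-batch gradient descent on warp parameters over the buffer.
for _ in range(2000):
    W_k, xs, ys = buffer[torch.randint(len(buffer), (1,)).item()]
    W_k = W_k.clone().requires_grad_()
    g, = torch.autograd.grad(task_loss(W_k, xs, ys), W_k, create_graph=True)
    meta_loss = task_loss(W_k - alpha * g, xs, ys)       # loss after one warped step
    warp_opt.zero_grad()
    meta_loss.backward()                                 # only the warp gradients are used
    warp_opt.step()

Phase 2 can revisit each stored parameterisation many times, which is what makes the offline variant data-efficient.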
Iterating over the buffer in mini-batches allows us to update warp parameters several times for a given sample of task parameter trajectories, which can greatly improve data efficiency. In our Omniglot experiment, we found offline meta-training to converge faster: in fact, a mini-batch size of 1 (i.e., η = 1 in Algorithm 2) converges rapidly without any instability. Finally, in Algorithm 3, we present a continual meta-training process where meta-training occurs throughout a stream of learning experiences. Here, C represents a multi-task objective, such as the average task loss across tasks. Meta-learning arises by collecting experiences continuously (across different tasks) and using these to accumulate the meta-gradient online. Warp parameters are updated intermittently with the accumulated meta-gradient. We use this algorithm in our maze navigation experiment, where task adaptation is internalised within the RNN task-learner. In this section, we detail WarpGrad methods used in our experiments. Warp-MAML We use this model for few-shot learning (Section 4.1). We use the full warp-objective in Eq. 11 together with the MAML objective (Eq. 1), where C MAML = L MAML under the constraint P = I. In our experiments, we trained Warp-MAML using the online training algorithm (Algorithm 1). Warp-Leap We use this model for multi-shot meta-learning. It is defined by applying Leap to θ 0 (Eq. 16), where the Leap objective is defined by minimising the expected cumulative chordal distance of task adaptation trajectories. Note that the Leap meta-gradient makes a first-order approximation to avoid backpropagating through the adaptation process. In our experiments, we train Warp-Leap using Algorithm 1 in the multi-shot tieredImageNet experiment and Algorithm 2 in the Omniglot experiment. We perform an ablation study for training algorithms, comparing exact (Eq. 11) versus approximate (Eq. 12) meta-objectives, and several implementations of the warp-layers on Omniglot in Appendix F. Warp-RNN For our Reinforcement Learning experiment, we define a WarpGrad optimiser by meta-learning an LSTM that modulates the weights of the task-learner (see Appendix I for details). For this algorithm, we face a continuous stream of experiences (episodes) that we meta-learn over using our continual meta-training algorithm (Algorithm 3). In our experiment, both L τ task and L τ meta are the advantage actor-critic objective; C is computed on one batch of 30 episodes, whereas L is accumulated over η = 30 such batches, for a total of 900 episodes. As each episode involves 300 steps in the environment, we cannot apply the exact meta-objective, but use the approximate meta-objective (Eq. 12). Specifically, let E τ = {s 0, a 1, r 1, s 1, . . ., s T, a T, r T, s T+1} denote an episode on task τ, where s denotes state, a action, and r instantaneous reward. Denote a mini-batch of randomly sampled task episodes by E = {E τ} τ∼p(τ) and an ordered set of k consecutive mini-batches by E 1:k = {E 1, . . ., E k}. (Algorithms 1–3 — the online, offline, and continual meta-training procedures — appear here; each initialises the accumulators g φ and g θ0 to zero, samples a mini-batch of tasks B from p(τ), accumulates g φ ← g φ + ∇L(φ; θ) over the K τ adaptation steps of each task τ ∈ B, and intermittently applies the update φ ← φ − β g φ; the listings are reproduced in Appendix B.) The multi-task objective C multi (θ; E) averages the advantage actor-critic loss over the tasks and episodes in E, with suitable normalising constants. The Warp-RNN objective combines this multi-task objective with the approximate meta-objective accumulated over consecutive mini-batches (see Eq. 18 in Appendix C). WarpGrad for Continual Learning For this experiment, we focus on meta-learning warp parameters.
Hence, the initialisation for each task sequence is a fixed random initialisation, (i.e. λC(θ 0) = 0). For the warp meta-objective, we take expectations over N task sequences, where each task sequence is a sequence of T = 5 sub-tasks that the task-learner observes one at a time; thus while the task loss is defined over the current sub-task, the meta-loss averages of the current and all prior sub-tasks, for each sub-task in the sequence. See Appendix J for detailed definitions. Importantly, because WarpGrad defines task adaptation abstractly by a probability distribution, we can readily implement a continual learning objective by modifying the joint task parameter distribution p(τ, θ) that we use in the meta-objective (Eq. 11). A task defines a sequence of sub-tasks over which we generate parameter trajectories θ τ. Thus, the only difference from multi-task meta-learning is that parameter trajectories are not generated under a fixed task, but arise as a function of the continual learning algorithm used for adaptation. We define the conditional distribution p(θ | τ) as before by sampling sub-task parameters θ τt from a mini-batch of such task trajectories, keeping track of which sub-task t it belongs to and which sub-tasks came before it in the given task sequence τ. The meta-objective is constructed, for any sub-task parameterisation θ τt, as, where D j is data from sub-task j (Appendix J). The meta-objective is an expectation over task parameterisations, To build intuition for what it means to warp space, we construct a simple 2-D problem over loss surfaces. A learner is faced with the task of minimising an objective function of the form where each task f τ is defined by scale and rotation functions g τ that are randomly sampled from a predefined distribution. Specifically, each task is defined by the objective function The task is to minimise the given objective from a randomly sampled initialisation, x {i=1,2} ∼ U (−3, 3). During meta-training, we train on a task for 100 steps using a learning rate of 0.1. Each task has a unique loss-surface that the learner traverses from the randomly sampled initialisation. While each loss-surface is unique, they share an underlying structure. Thus, by metalearning a warp over trajectories on randomly sampled loss surfaces, we expect WarpGrad to learn a warp that is close to invariant to spurious descent directions. In particular, WarpGrad should produce a smooth warped space that is quasi-convex for any given task to ensure that the task-learner finds a minimum as fast as possible regardless of initialisation. To visualise the geometry, we use an explicit warp Ω defined by a 2-layer feed-forward network with a hidden-state size of 30 and tanh non-linearities. We train warp parameters for 100 meta-training steps; Figure 7: Example trajectories on three task loss surfaces. We start Gradient Descent (black) and WarpGrad (magenta) from the same initialisation; while SGD struggles with the curvature, the WarpGrad optimiser has learned a warp such that gradient descent in the representation space (top) leads to rapid convergence in model parameter space (bottom). in each meta-step we sample a new task surface and a mini-batch of 10 random initialisations that we train separately. We train to convergence and accumulate the warp meta-gradient online (Algorithm 1). We evaluate against gradient descent on randomly sampled loss surfaces (Figure 7). 
Both optimisers start from the same initialisation, chosen such that standard gradient descent struggles; we expect the WarpGrad optimisers to learn a geometry that is robust to the initialisation (top row). This is indeed what we find; the geometry learned by WarpGrad smoothly warps the native loss surface into a well-behaved space where gradient descent converges to a local minimum. We follow the protocol of prior work, including choice of hyper-parameters. In this setup, each of the 50 alphabets that comprise the dataset constitutes a distinct task. Each task is treated as a 20-way classification problem. Four alphabets have fewer than 20 characters in the alphabet and are discarded, leaving us with 46 alphabets in total. 10 alphabets are held out for final meta-testing; which alphabets are held out depends on the seed to account for variations across alphabets; we train and evaluate all baselines on 10 seeds. For each character in an alphabet, there are 20 raw samples. Of these, 5 are held out for final evaluation on the task while the remainder is used to construct a training set. Raw samples are pre-processed by random affine transformations in the form of (a) scaling between [0.8, 1.2], (b) rotation, and (c) cropping height and width by a factor of [−0.2, 0.2] in each dimension. This ensures tasks are too hard for few-shot learning. During task adaptation, mini-batches are sampled at random without ensuring class balance (in contrast to few-shot classification protocols). Note that benchmarks under this protocol are not compatible with few-shot learning benchmarks. We use the same convolutional neural network architecture and hyper-parameters as in prior work. This learner stacks a convolutional block comprised of a 3 × 3 convolution with 64 filters, followed by 2 × 2 max-pooling, batch-normalisation, and ReLU activation, four times. All images are down-sampled to 28 × 28, resulting in a 1 × 1 × 64 feature map that is passed on to a final linear layer. We create a Warp-Leap meta-learner that inserts warp-layers between each convolutional block, where each h is defined as above. In our main experiment, each ω i is simply a 3 × 3 convolutional layer with zero padding; in Appendix F we consider both simpler and more sophisticated versions. We find that relatively simple warp-layers do quite well. However, adding capacity does improve generalisation performance. We meta-learn the initialisation of task parameters using the Leap objective (Eq. 16), detailed in Appendix C. Both L τ meta and L τ task are defined as the negative log-likelihood loss; importantly, we evaluate them on different batches of task data to ensure warp-layers encourage generalisation. We found no additional benefit in this experiment from using held-out data to evaluate L τ meta. We use the offline meta-training algorithm (Appendix B, Algorithm 2); in particular, during meta-training, we sample mini-batches of 20 tasks and train task-learners for 100 steps to collect 2000 task parameterisations into a replay buffer. (Figure caption — top: test accuracies on held-out tasks after meta-training on a varying number of tasks; bottom: AUC under the accuracy curve on held-out tasks after meta-training on a varying number of tasks; shading represents standard deviation across 10 independent runs; we compare Warp-Leap, Leap, Reptile, multi-headed fine-tuning, as well as SGD and KFAC, which used random initialisation but with 10x larger batch size and learning rate.)
Task-learners share a common initialisation and warp parameters that are held fixed during task adaptation. Once collected, we iterate over the buffer by randomly sampling mini-batches of task parameterisations without replacement. Unless otherwise noted, we used a batch size of η = 1. For each mini-batch, we update φ by applying gradient descent under the canonical meta-objective (Eq. 11), where we evaluate L τ meta on a randomly sampled mini-batch of data from the corresponding task. Consequently, for each meta-batch, we take (up to) 2000 meta-gradient steps on warp parameters φ. We find that this form of mini-batching causes the meta-training loop to converge much faster and induces no discernible instability. We compare Warp-Leap against no meta-learning with standard gradient descent (SGD) or KFAC . We also benchmark against baselines provided in; Leap, Reptile , MAML, and multi-headed fine-tuning. All learners benefit substantially from large batch sizes as this enables higher learning rates. To render no-pretraining a competitive option within a fair computational budget, we allow SGD and KFAC to use 10x larger batch sizes, enabling 10x larger learning rates. This renders them computationally costly, taking 2x and 4x longer to train on a given task during meta-test time than Warp-Leap, respectively. No. Meta-training tasks 1 49.5 ± 7.8 37.6 ± 4.8 40.4 ± 4.0 53.8 ± 5.0 40.0 ± 2.6 56.0 51.0 3 68.8 ± 2.8 53.4 ± 3.1 53.1 ± 4.2 64.6 ± 3.3 48.6 ± 2.5 56.0 51.0 5 75.0 ± 3.6 59.5 ± 3.7 58.3 ± 3.3 67.7 ± 2.8 51.6 ± 3.8 56.0 51.0 10 81.2 ± 2.4 67.4 ± 2.4 65.0 ± 2.1 71.3 ± 2.0 54.1 ± 2.8 56.0 51.0 15 82.7 ± 3.3 70.0 ± 2.4 66.6 ± 2.9 73.5 ± 2.4 54.8 ± 3.4 56.0 51.0 20 82.0 ± 2.6 73.3 ± 2.3 69.4 ± 3.4 75.4 ± 3.2 56.6 ± 2.0 56.0 51.0 25 83.8 ± 1.9 74.8 ± 2.7 70.8 ± 1.9 76.4 ± 2.2 56.7 ± 2.1 56.0 51.0 F ABLATION STUDY: WARP LAYERS, META-OBJECTIVE, AND META-TRAINING WarpGrad provides a principled approach for model-informed meta-learning and offers several degrees of freedom. To evaluate these design choices, we conduct an ablation study on Warp-Leap where we vary the design of warp-layers as well as meta-training approach. For the ablation study, we fixed the number of pretraining tasks to 25 and report final test accuracy over 4 independent runs. All ablations use the same hyper-parameters, except for online meta-training which uses a learning rate of 0.001. First, we vary the meta-training protocol by (a) using the approximate objective (Eq. 12), (b) using online meta-training (Algorithm 1), and (c) whether meta-learning the learning rate used for task adaptation is beneficial in this experiment. We meta-learn a single scalar learning rate (as warp parameters can learn layer-wise scaling). Meta-gradients for the learning rate are clipped at 0.001 and we use a learning rate of 0.001. Note that when using offline meta-training, we store both task parameterisations and the momentum buffer in that phase and use them in the update rule when computing the canonical objective (Eq. 11). Further, we vary the architecture used for warp-layers. We study simpler versions that use channelwise scaling and more complex versions that use non-linearities and residual connections. We also evaluate a version where each warp-layer has two stacked convolutions, where the first warp convolution outputs 128 filters and the second warp convolution outputs 64 filters. Finally, in the two-layer warp-architecture, we evaluate a version that inserts a FiLM layer between the two warp convolutions. 
These are adapted during task training from a 0 initialisation; they amount to task embeddings that condition gradient warping on task statistics. Full are reported in Table 3. G ABLATION STUDY: WARPGRAD AND NATURAL GRADIENT DESCENT and Natural Neural Nets . First, we isolate the effect of warping task loss surfaces by fixing a random initialisation and only meta-learning warp parameters. That is, in this experiment, we set λC(θ 0) = 0. We compare against two baselines, stochastic gradient descent (SGD) and KFAC, both trained from a random initialisation. We use task mini-batch sizes of 200 and task learning rates of 1.0, otherwise we use the same hyper-parameters as in the main experiment. For WarpGrad, we meta-train with these hyper-parameters as well. We evaluate two WarpGrad architectures, in one, we use linear warp-layers, which gives a block-diagonal preconditioning, as in KFAC. In the other, we use our most expressive warp configuration from the ablation experiment in appendix F, where warp-layers are two-layer convolutional block with residual connections, batch normalisation, and ReLU activation. We find that warped geometries facilitate task adaptation on held-out tasks to a greater degree than either SGD or KFAC by a significant margin (table 4). We further find that going beyond block-diagonal preconditioning yields a significant improvement in performance. Second, we explore whether the geometry that we meta-learn under in the full Warp-Leap algorithm is approximately Fisher. In this experiment we use the main Warp-Leap architecture. We use a meta-learner trained on 25 tasks and that we evaluate on 10 held-out tasks. Because warp-layers are linear in this configuration, if the learned geometry is approximately Fisher, post-warp activations should be zero-centred and the layer-wise covariance matrix should satisfy where I is the identify matrix . If true, Warp-Leap would learn a block-diagonal approximation to the Inverse Fisher Matrix, as Natural Neural Nets. To test this, during task adaptation on held-out tasks, we compute the mean activation in each convolutional layer pre-and post-warping. We also compute the Shatten-1 norm of the difference between layer activation covariance and the identity matrix pre-and post-warping, as described above. We average statistics over task and adaptation step (we found no significant variation in these dimensions). Figure 9 summarise our . We find that, in general, WarpGrad-Leap has zero-centered post-warp activations. That pre-warp activations are positive is an artefact of the ReLU activation function. However, we find that the correlation structure is significantly different from what we would expect if Warp-Leap were to represent the Fisher matrix; post-warp covariances are significantly dissimilar from the identity matrix and varies across layers. These indicate that WarpGrad methods behave distinctly different from Natural Gradient Descent methods. One possibility is that WarpGrad methods do approximate the Fisher Information Matrix, but with higher accuracy than other methods. A more likely explanation is that WarpGrad methods encode a different geometry since they can learn to leverage global information beyond the task at hand, which enables them to express geometries that standard Natural Gradient Descent cannot. H miniIMAGENET AND tieredIMAGENET miniImageNet This dataset is a subset of 100 classes sampled randomly from the 1000 base classes in the ILSVRC-12 training set, with 600 images for each class. 
Following , classes are split into non-overlapping meta-training, meta-validation and meta-tests sets with 64, 16, and 20 classes in each respectively. tieredImageNet As described in , this dataset is a subset of ILSVRC-12 that stratifies 608 classes into 34 higher-level categories in the ImageNet human-curated hierarchy . In order to increase the separation between meta-train and meta-evaluation splits, 20 of these categories are used for meta-training, while 6 and 8 are used for meta-validation and metatesting respectively. Slicing the class hierarchy closer to the root creates more similarity within each split, and correspondingly more diversity between splits, rendering the meta-learning problem more challenging. High-level categories are further divided into 351 classes used for meta-training, 97 for meta-validation and 160 for meta-testing, for a total of 608 base categories. All the training images in ILSVRC-12 for these base classes are used to generate problem instances for tieredImageNet, of which there are a minimum of 732 and a maximum of 1300 images per class. For all experiments, N -way K-shot classification problem instances were sampled following the standard image classification methodology for meta-learning proposed in. A subset of N classes was sampled at random from the corresponding split. For each class, K arbitrary images were chosen without replacement to form the training dataset of that problem instance. As usual, a disjoint set of L images per class were selected for the validation set. Few-shot classification In these experiments we used the established experimental protocol for evaluation in meta-validation and meta-testing: 600 task instances were selected, all using N = 5, K = 1 or K = 5, as specified, and L = 15. During meta-training we used N = 5, K = 5 or K = 15 respectively, and L = 15. Task-learners used 4 convolutional blocks defined by with 128 filters (or less, chosen by hyperparameter tuning), 3 × 3 kernels and strides set to 1, followed by batch normalisation with learned scales and offsets, a ReLU non-linearity and 2 × 2 max-pooling. The output of the convolutional stack (5 × 5 × 128) was flattened and mapped, using a linear layer, to the 5 output units. The last 3 convolutional layers were followed by warp-layers with 128 filters each. Only the final 3 task-layer parameters and their corresponding scale and offset batch-norm parameters were adapted during task-training, with the corresponding warp-layers and the initial convolutional layer kept fixed and meta-learned using the WarpGrad objective. Note that, with the exception of CAVIA, other baselines do worse with 128 filters as they overfit; MAML and T-Nets achieve 46% and 49 % 5-way-1-shot test accuracy with 128 filters, compared to their best reported (48.7% and 51.7%, respectively). Hyper-parameters were tuned independently for each condition using random grid search for highest test accuracy on meta-validation left-out tasks. Grid sizes were 50 for all experiments. We choose the optimal hyper-parameters (using early stopping at the meta-level) in terms of meta-validation test set accuracy for each condition and we report test accuracy on the meta-test set of tasks. 60000 metatraining steps were performed using meta-gradients over a single randomly selected task instances and their entire trajectories of 5 adaptation steps. Task-specific adaptation was done using stochastic gradient descent without momentum. We use Adam for meta-updates. 
Multi-shot classification For these experiments we used N = 10, K = 640 and L = 50. Tasklearners are defined similarly, but stacking 6 convolutional blocks defined by 3 × 3 kernels and strides set to 1, followed by batch normalisation with learned scales and offsets, a ReLU non-linearity and 2 × 2 max-pooling (first 5 layers). The sizes of convolutional layers were chosen by hyper-parameter tuning to {64, 64, 160, 160, 256, 256}. The output of the convolutional stack (2 × 2 × 256) was flattened and mapped, using a linear layer, to the 10 output units. Hyper-parameters were tuned independently for each algorithm, version and baseline using random grid search for highest test accuracy on meta-validation left-out tasks. Grid sizes were 200 for all multi-shot experiments. We choose the optimal hyper-parameters in terms of mean meta-validation test set accuracy AUC (using early stopping at the meta-level) for each condition and we report test accuracy on the meta-test set of tasks. 2000 meta-training steps were performed using averaged meta-gradients over 5 random task instances and their entire trajectories of 100 adaptation steps with batch size 64, or inner-loops. Task-specific adaptation was done using stochastic gradient descent with momentum (0.9). Meta-gradients were passed to Adam in the outer loop. We test WarpGrad against Leap, Reptile, and training from scratch with large batches and tuned momentum. We tune all meta-learners for optimal performance on the validation set. WarpGrad outperforms all baselines both in terms of rate of convergence and final test performance (Figure 10). To illustrate both how WarpGrad may be used with Recurrent Neural Networks in an online metalearning setting, as well as in a Reinforcement Learning environment, we evaluate it in a maze navigation task proposed by. The environment is a fixed maze and a task is defined by randomly choosing a goal location in the maze. During a task episode of length 200, the goal location is fixed but the agent gets teleported once it finds it. Thus, during an episode the agent must first locate the goal, then return to it as many times as possible, each time being randomly teleported to a new starting location. We use an identical setup as , except our grid is of size 11 × 11 as opposed to 9 × 9. We compare our Warp-RNN to a Learning to Reinforcement Learn and Hebbian meta-learners (; 2019). The task-learner in all cases is an advantage actor-critic , where the actor and critic share an underlying basic RNN, whose hidden state is projected into a policy and value function by two separate linear layers. The RNN has a hidden state size of 100 and tanh non-linearities. Following , for all benchmarks, we train the task-learner using Adam with a learning rate of 1e − 3 for 200 000 steps using batches of 30 episodes, each of length 200. Metalearning arises in this setting as each episode encodes a different task, as the goal location moves, and by learning across episodes the RNN is encoding meta-information in its parameters that it can leverage during task adaptation (via its hidden state ). for further details. We design a Warp-RNN by introducing a warp-layer in the form of an LSTM that is frozen for most of the training process. , we use this meta-LSTM to modulate the task RNN. Given an episode with input vector x t, the task RNN is defined by where W, V, b are task-adaptable parameters; each U (i) j,t is a diagonal warp matrix produced by projecting from the hidden state of the meta-LSTM, U Figure 6, Appendix A). 
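A hypothetical sketch of such a modulated cell is given below; the concrete modulation form (three diagonal modulators, one per term of the vanilla RNN update) and all layer sizes are our assumptions for illustration, and the actual Warp-RNN is defined in Appendix I.

import torch
import torch.nn as nn

class WarpRNNCell(nn.Module):
    """Task RNN whose input, recurrent, and bias terms are modulated by diagonal
    matrices projected from the hidden state of a meta-LSTM (the warp).
    The meta-LSTM and its projection are warp parameters, frozen during task training."""
    def __init__(self, in_dim, hid_dim=100, meta_dim=64):
        super().__init__()
        # Task-adaptable parameters (the baseline RNN).
        self.W = nn.Linear(in_dim, hid_dim, bias=False)
        self.V = nn.Linear(hid_dim, hid_dim, bias=False)
        self.b = nn.Parameter(torch.zeros(hid_dim))
        # Warp parameters: a meta-LSTM plus a projection to diagonal modulators.
        self.meta = nn.LSTMCell(in_dim + hid_dim, meta_dim)
        self.proj = nn.Linear(meta_dim, 3 * hid_dim)

    def forward(self, x, h, meta_state):
        mh, mc = self.meta(torch.cat([x, h], dim=-1), meta_state)
        u1, u2, u3 = self.proj(mh).chunk(3, dim=-1)      # diagonal modulation vectors
        h_new = torch.tanh(u1 * self.W(x) + u2 * self.V(h) + u3 * self.b)
        return h_new, (mh, mc)

# usage sketch: h = torch.zeros(batch, 100)
#               meta_state = (torch.zeros(batch, 64), torch.zeros(batch, 64))

During task training only W, V, and b would be updated; the meta-LSTM and its projection would be updated intermittently with the accumulated meta-gradient.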
Because the meta-LSTM is frozen for most of the training process, task adaptable parameters correspond to those of the baseline RNN. To control for the capacity of the meta-LSTM, we also train a HyperRNN where the LSTM is updated with every task adaptation; we find this model does worse than the WarpGrad-RNN. We also compare the non-linear preconditioning that we obtain in our Warp-RNN to linear forms of preconditioning defined in prior works. We implement a T-Nets-RNN meta-learner, defined by embedding linear projections T h, T x and T b that are meta-learned in the task RNN, h t = tanh(T h V h t + T x W x t + b). Note that we cannot backpropagate to these meta-parameters as per the T-Nets (MAML) framework. Instead, we train T h, T x, T b with the meta-objective and meta-training algorithm we use for the Warp-RNN. The T-Nets-RNN does worse than the baseline RNN and generally fails to learn. We meta-train the Warp-RNN using the continual meta-training algorithm (Algorithm 3, see Appendix B for details), which accumulates meta-gradients continuously during training. Because task training is a continuous stream of batches of episodes, we accumulating the meta-gradient using the approximate objective (Eq. 12, where L τ task and L τ meta are both the same advantage actor-critic objective) and update warp-parameters on every 30th task parameter update. We detail the meta-objective in Appendix C (see Eq. 18). Our implementation of a Warp-RNN can be seen as meta-learning "slow" weights to facilitate learning of "fast" weights . Implementing Warp-RNN requires four lines of code on top of the standard training script. The task-learner is the same in all experiments with the same number of learnable parameters and hidden state size. Compared to all baselines, we find that the Warp-RNN converges faster and achieves a higher cumulative reward (Figure 4 and Figure 11). J META-LEARNING FOR CONTINUAL LEARNING Online SGD and related optimisation methods tend to adapt neural network models to the data distribution encountered last during training, usually leading to what has been termed "catastrophic meta-learned). Each non-linearity is followed by a residual warping block consisting of 2-layer feedforward networks with 100 hidden units and tanh non-linearities, with meta-learned parameters φ which are fixed during the task adaptation process. Continual learning as task adaptation The task target function g τ is partition into 5 sets of subtask. The task-learner sees one partition at a time and is given n = 20 gradient steps to adapt, for a total of K = 100 steps of online gradient descent updates for the full task sequence; recall that every such sequence starts from a fixed random initialisation θ 0. The adaptation is completely online since at step k = 1,..., K we sample a new mini-batch D k task of 5 samples from a single sub-task (sub-interval). The data distribution changes after each n = 20 steps with inputs x coming from the next sub-interval and targets form the same function g τ (x). During meta-training we always present tasks in the same order, presenting intervals from left to right. The online (sub-)task loss is defined on the current mini-batch D k task at step k: 2. Adaptation to each sub-task uses sub-task data only to form task parameter updates θ We used a constant learning rate α = 0.001. Warp-parameters φ are fixed across the full task sequence during adaptation and are meta-learned across random samples of task sequences, which we describe next. 
Meta-learning an optimiser for continual learning To investigate the ability of WarpGrad to learn an optimiser for continual learning that mitigates catastrophic forgetting, we fix a random initialisation prior to meta-training that is not meta-learned; every task-learner is initialised with these parameters. To meta-learn an optimiser for continual learning, we need a meta-objective that encourages such behaviour. Here, we take a first step towards a framework for meta-learned continual learning. We define the meta-objective L τ meta as an incremental multitask objective that, for each sub-task!τ t in a given task sequence τ, averages the validation sub-task losses (Eq. 22) for the current and every preceding loss in the task sequence. The task meta-objective is defined by summing over all sub-tasks in the task sequence. For some sub-task parameterisation θ τt, we have As before, the full meta-objective is an expectation over the joint task parameter distribution (Eq. 11); for further details on the meta-objective, see Appendix C, Eq. 19. This meta-objective gives equal weight to all the tasks in the sequence by averaging the regression step loss over all sub-tasks where a prior sub-task should be learned or remembered. For example, losses from the first sub-task, defined using the interval [−5, −3], will appear nT times in the meta-objective. Conversely, the last sub-task in a sequence, defined on the interval, is learned only in the last n = 20 steps of task adaptation, and hence appers n times in the meta-objective. Normalising on number of appearances corrects for this bias. We trained warp-parameters using Adam and a meta-learning rate of 0.001, sampling 5 random tasks to form a meta-batch and repeating the process for 20 000 steps of meta-training. Results Figure 12 shows a breakdown of the validation loss across the 5 sequentially learned tasks over the 100 steps of online learning during task adaptation. Results are averaged over 100 random regression problem instances. The meta-learned WarpGrad optimiser reduces the loss of the task currently being learned in each interval while also largely retaining performance on previous tasks. There is an immediate relatively minor loss of performance, after which performance on previous tasks is retained. We hypothesise that this is because the meta-objectives averages over the full learning curve, as opposed to only the performance once a task has been adapted to. As such, the WarpGrad optimiser may allow for some degree of performance loss. Intriguingly, in all cases, after an initial spike in previous sub-task losses when switching to a new task, the spike starts to revert back some way towards optimal performance, suggesting that the WarpGrad optimiser facilitates positive backward transfer, without this being explicitly enforced in the meta-objective. Deriving a principled meta-objective for continual learning is an exciting area for future research. | We propose a novel framework for meta-learning a gradient-based update rule that scales to beyond few-shot learning and is applicable to any form of learning, including continual learning. | 1,230 | scitldr |
In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. Despite their outstanding performance on several machine learning tasks, deep neural networks have been shown to be susceptible to adversarial attacks BID20 BID4. These attacks come in the form of adversarial examples: carefully crafted perturbations added to a legitimate input sample. In the context of classification, these perturbations cause the legitimate sample to be misclassified at inference time BID20 BID4 BID16 BID11. Such perturbations are often small in magnitude and do not affect human recognition but can drastically change the output of the classifier. Recent literature has considered two types of threat models: black-box and white-box attacks. Under the black-box attack model, the attacker does not have access to the classification model parameters; whereas in the white-box attack model, the attacker has complete access to the model architecture and parameters, including potential defense mechanisms BID21 BID2.Various defenses have been proposed to mitigate the effect of adversarial attacks. These defenses can be grouped under three different approaches: modifying the training data to make the classifier more robust against attacks, e.g., adversarial training which augments the training data of the classifier with adversarial examples BID20 BID4, modifying the training procedure of the classifier to reduce the magnitude of gradients, e.g., defensive distillation BID18, and attempting to remove the adversarial noise from the input samples BID6 BID13. All of these approaches have limitations in the sense that they are effective against either white-box attacks or black-box attacks, but not both BID21 BID13. Furthermore, some of these defenses are devised with specific attack models in mind and are not effective against new attacks. In this paper, we propose a novel defense mechanism which is effective against both white-box and black-box attacks. We propose to leverage the representative power of Generative Adversarial Networks (GAN) to diminish the effect of the adversarial perturbation, by "projecting" input images onto the range of the GAN's generator prior to feeding them to the classifier. In the GAN framework, two models are trained simultaneously in an adversarial setting: a generative model that emulates the data distribution, and a discriminative model that predicts whether a certain input came from real data or was artificially created. 
The generative model learns a mapping G from a low-dimensional vector z ∈ R k to the high-dimensional input sample space R n. During training of the GAN, G is encouraged to generate samples which resemble the training data. It is, therefore, expected that legitimate samples will be close to some point in the range of G, whereas adversarial samples will be further away from the range of G. Furthermore, "projecting" the adversarial examples onto the range of the generator G can have the desirable effect of reducing the adversarial perturbation. The projected output, computed using Gradient Descent (GD), is fed into the classifier instead of the original (potentially adversarially modified) image. We empirically demonstrate that this is an effective defense against both black-box and white-box attacks on two benchmark image datasets. The rest of the paper is organized as follows. We introduce the necessary regarding known attack models, defense mechanisms, and GANs in Section 2. Our defense mechanism, which we call Defense-GAN, is formally motivated and introduced in Section 3. Finally, experimental , under different threat models, as well as comparisons to other defenses are presented in Section 4. In this work, we propose to use GANs for the purpose of defending against adversarial attacks in classification problems. Before detailing our approach in the next section, we explain related work in three parts. First, we discuss different attack models employed in the literature. We, then, go over related defense mechanisms against these attacks and discuss their strengths and shortcomings. Lastly, we explain necessary information regarding GANs. Various attack models and algorithms have been used to target classifiers. All attack models we consider aim to find a perturbation δ to be added to a (legitimate) input x ∈ R n, ing in the adversarial examplex = x + δ. The ∞ -norm of the perturbation is denoted by BID4 and is chosen to be small enough so as to remain undetectable. We consider two threat levels: black-and white-box attacks. White-box models assume that the attacker has complete knowledge of all the classifier parameters, i.e., network architecture and weights, as well as the details of any defense mechanism. Given an input image x and its associated ground-truth label y, the attacker thus has access to the loss function J(x, y) used to train the network, and uses it to compute the adversarial perturbation δ. Attacks can be targeted, in that they attempt to cause the perturbed image to be misclassified to a specific target class, or untargeted when no target class is specified. In this work, we focus on untargeted white-box attacks computed using the Fast Gradient Sign Method (FGSM) BID4, the Randomized Fast Gradient Sign Method (RAND+FGSM) BID21, and the Carlini-Wagner (CW) attack BID2. Although other attack models exist, such as the Iterative FGSM, the Jacobian-based Saliency Map Attack (JSMA) BID16, and Deepfool , we focus on these three models as they cover a good breadth of attack algorthims. FGSM is a very simple and fast attack algorithm which makes it extremely amenable to real-time attack deployment. 
On the other hand, RAND+FGSM, an equally simple attack, increases the power of FGSM for white-box attacks BID21, and finally, the CW attack is one of the most powerful white-box attacks to-date BID2. Fast Gradient Sign Method (FGSM) Given an image x and its corresponding true label y, the FGSM attack sets the perturbation δ to: δ = ε · sign(∇_x J(x, y)). FGSM BID4 was designed to be extremely fast rather than optimal. It simply uses the sign of the gradient at every pixel to determine the direction with which to change the corresponding pixel value. Randomized Fast Gradient Sign Method (RAND+FGSM) The RAND+FGSM BID21 attack is a simple yet effective method to increase the power of FGSM against models which were adversarially trained. The idea is to first apply a small random perturbation before using FGSM. More explicitly, for α < ε, random noise is first added to the legitimate image x: x′ = x + α · sign(N(0, I)). Then, the FGSM attack is computed on x′, resulting in x̃ = x′ + (ε − α) · sign(∇_{x′} J(x′, y)). The Carlini-Wagner (CW) attack The CW attack is an effective optimization-based attack model BID2. In many cases, it can reduce the classifier accuracy to almost 0% BID2 BID13. The perturbation δ is found by solving an optimization problem of the form: min_δ ||δ|| + c · f(x + δ), where f is an objective function that drives the example x to be misclassified, and c > 0 is a suitably chosen constant. The ℓ2, ℓ0, and ℓ∞ norms are considered. We refer the reader to BID2 for details regarding the approach to solving this problem and setting the constant c. For black-box attacks, we consider untargeted FGSM attacks computed on a substitute model. As previously mentioned, black-box adversaries have no access to the classifier or defense parameters. It is further assumed that they do not have access to a large training dataset but can query the targeted DNN as a black-box, i.e., access labels produced by the classifier for specific query images. The adversary trains a model, called substitute, which has a (potentially) different architecture than the targeted classifier, using a very small dataset augmented by synthetic images labeled by querying the classifier. Adversarial examples are then found by applying any attack method on the substitute network. It was found that such examples designed to fool the substitute often end up being misclassified by the targeted classifier BID20. In other words, black-box attacks are easily transferable from one model to the other.
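For concreteness, a minimal PyTorch sketch of the FGSM and RAND+FGSM attacks is given below; the [0, 1] pixel clipping and the helper names are our own assumptions, and model stands for any differentiable classifier returning logits.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Untargeted FGSM: one signed-gradient step of size eps on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)             # J(x, y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x + eps * grad.sign()).clamp(0.0, 1.0)      # keep pixels in a valid range

def rand_fgsm(model, x, y, eps, alpha):
    """RAND+FGSM: random sign perturbation of size alpha, then FGSM of size eps - alpha."""
    x_prime = (x + alpha * torch.randn_like(x).sign()).clamp(0.0, 1.0)
    return fgsm(model, x_prime, y, eps - alpha)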
This has the desirable effect of learning a smoother network and reducing the amplitude of gradients around input points, making it difficult for attackers to generate adversarial examples BID18. It was, however, shown that, while defensive distillation is effective against white-box attacks, it fails to adequately protect against black-box attacks transferred from other networks BID2. Recently, BID13 introduced MagNet as an effective defense strategy. It trains a reformer network (which is an auto-encoder or a collection of auto-encoders) to move adversarial examples closer to the manifold of legitimate, or natural, examples. When using a collection of auto-encoders, one reformer network is chosen at random at test time, thus strengthening the defense. It was shown to be an effective defense against gray-box attacks where the attacker knows everything about the network and defense, except the parameters. MagNet is the closest defense to our approach, as it attempts to reform an adversarial sample using a learnt auto-encoder. The main differences between MagNet and our approach are: we use GANs instead of auto-encoders, and, most importantly, we use GD minimization to find latent codes as opposed to a feedforward encoder network. This makes Defense-GAN more robust, especially against white-box attacks. GANs, originally introduced by, consist of two neural networks, G and D. G: R k → R n maps a low-dimensional latent space to the high dimensional sample space of x. D is a binary neural network classifier. In the training phase, G and D are typically learned in an adversarial fashion using actual input data samples x and random vectors z. An isotropic Gaussian prior is usually assumed on z. While G learns to generate outputs G(z) that have a distribution similar to that of x, D learns to discriminate between "real" samples x and "fake" samples G(z). D and G are trained in an alternating fashion to minimize the following min-max loss: DISPLAYFORM0 It was shown that the optimal GAN is obtained when the ing generator distribution p g = p data.However, GANs turned out to be difficult to train in practice BID5, and alternative formulations have been proposed. introduced Wasserstein GANs (WGANs) which are a variant of GANs that use the Wasserstein distance, ing in a loss function with more desirable properties: DISPLAYFORM1 In this work, we use WGANs as our generative model due to the stability of their training methods, especially using the approach in BID5.3 PROPOSED DEFENSE-GAN Figure 1: Overview of the Defense-GAN algorithm. As mentioned in Section 2.3, the GAN min-max loss in admits a global optimum when p g = p data. It can be similarly shown that WGAN admits an optimum to its own minmax loss in, when the set {x | p g (x) = p data (x)} has zero Lebesgue-measure. Formally, Lemma 1 A generator distribution p g is a global optimum for the WGAN min-max game defined in, if and only if p g (x) = p data (x) for all x ∈ R n, potentially except on a set of zero Lebesguemeasure. A sketch of the proof can be found in Appendix A.Additionally, it was shown that, if G and D have enough capacity to represent the data, and if the training algorithm is such that p g converges to p data, then DISPLAYFORM0 where G t is the generator of a GAN or WGAN 1 after t steps of its training algorithm BID8. 
This serves to show that, under ideal conditions, the addition of the GAN reconstruction loss minimization step should not affect the performance of the classifier on natural, legitimate samples, as such samples should be almost exactly recovered. Furthermore, we hypothesize that this step will help reduce the adversarial noise, which follows a different distribution than that of the GAN training examples. Defense-GAN is a defense strategy to combat both white-box and black-box adversarial attacks against classification networks. At inference time, given a trained GAN generator G and an image x to be classified, z* is first found so as to minimize the reconstruction error ||G(z) − x||_2^2 over z. G(z*) is then given as the input to the classifier. The algorithm is illustrated in Figure 1. As this is a highly non-convex minimization problem, we approximate it by doing a fixed number L of GD steps using R different random initializations of z (which we call random restarts), as shown in Figures 1 and 2. The GAN is trained on the available classifier training dataset in an unsupervised manner. The classifier can be trained on the original training images, their reconstructions using the generator G, or a combination of the two. As was discussed in Section 3.1, as long as the GAN is appropriately trained and has enough capacity to represent the data, original clean images and their reconstructions should not differ much. Therefore, these two classifier training strategies should, at least theoretically, not differ in performance. Compared to existing defense mechanisms, our approach is different in the following aspects: 1. Defense-GAN can be used in conjunction with any classifier and does not modify the classifier structure itself. It can be seen as an add-on or pre-processing step prior to classification. 2. If the GAN is representative enough, re-training the classifier should not be necessary and any drop in performance due to the addition of Defense-GAN should not be significant. 3. Defense-GAN can be used as a defense against any attack: it does not assume an attack model, but simply leverages the generative power of GANs to reconstruct adversarial examples. 4. Defense-GAN is highly non-linear, and white-box gradient-based attacks will be difficult to perform due to the GD loop. A detailed discussion about this can be found in Appendix B. We assume three different attack threat levels: 1. Black-box attacks: the attacker does not have access to the details of the classifier and defense strategy. It therefore trains a substitute network to find adversarial examples. 2. White-box attacks: the attacker knows all the details of the classifier and defense strategy. It can compute gradients on the classifier and defense networks in order to find adversarial examples. 3. White-box attacks, revisited: in addition to the details of the architectures and parameters of the classifier and defense, the attacker has access to the random seed and random number generator. In the case of Defense-GAN, this means that the attacker knows all R random initializations of z used for the reconstruction. We compare our method to adversarial training BID4 and MagNet BID13 under the FGSM, RAND+FGSM, and CW (with ℓ2 norm) white-box attacks, as well as the FGSM black-box attack. Details of all network architectures used in this paper can be found in Appendix C. When the classifier is trained using the reconstructed images (G(z*)), we refer to our method as Defense-GAN-Rec, and we use Defense-GAN-Orig when the original images (x) are used to train the classifier.
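A minimal sketch of the reconstruction step is given below, assuming a PyTorch generator; the SGD learning rate and the per-image selection of the best restart are our assumptions, while the L GD steps and R random restarts follow the description above.

import torch

def defense_gan_project(generator, x, latent_dim, L=200, R=10, lr=0.05):
    """Project a batch x onto the range of a trained generator by running L
    gradient-descent steps on ||G(z) - x||^2 from R random initialisations of z,
    keeping, per image, the restart with the lowest reconstruction error."""
    b = x.size(0)
    best_rec = torch.zeros_like(x)
    best_err = torch.full((b,), float("inf"), device=x.device)
    for _ in range(R):                                   # random restarts
        z = torch.randn(b, latent_dim, device=x.device, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(L):                               # GD steps on the reconstruction MSE
            opt.zero_grad()
            err = ((generator(z) - x) ** 2).flatten(1).mean(dim=1)
            err.sum().backward()
            opt.step()
        with torch.no_grad():
            rec = generator(z)
            err = ((rec - x) ** 2).flatten(1).mean(dim=1)
            better = err < best_err
            best_err = torch.where(better, err, best_err)
            best_rec[better] = rec[better]
    return best_rec, best_err                            # feed best_rec to the classifier

At inference, the classifier is applied to best_rec in place of x; best_err can also be reused as the detection statistic discussed later.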
Our GAN follows the WGAN training procedure in BID5, and details of the generator and discriminator network architectures are given in TAB4. The reformer network (encoder) for the MagNet baseline is provided in TAB5. Our implementation is based on TensorFlow BID0 and builds on open-source software: CleverHans by BID15 and improved WGAN training by BID5. We use machines equipped with NVIDIA GeForce GTX TITAN X GPUs. In our experiments, we use two different image datasets: the MNIST handwritten digits dataset BID10 and the Fashion-MNIST (F-MNIST) clothing articles dataset BID22. Both datasets consist of 60, 000 training images and 10, 000 testing images. We split the training images into a training set of 50, 000 images and hold-out a validation set containing 10, 000 images. For white-box attacks, the testing set is kept the same (10, 000 samples). For black-box attacks, the testing set is divided into a small hold-out set of 150 samples reserved for adversary substitute training, as was done in, and the remaining 9, 850 samples are used for testing the different methods. In this section, we present experimental on FGSM black-box attacks. As previously mentioned, the attacker trains a substitute model, which could differ in architecture from the targeted model, using a limited dataset consisting of 150 legitimate images augmented with synthetic images labeled using the target classifier. The classifier and substitute model architectures used and referred to throughout this section are described in TAB3 in the Appendix. In TAB0, we present our classification accuracy and compare to other defense methods. As can be seen, FGSM black-box attacks were successful at reducing the classifier accuracy by up to 70%. All considered defense mechanisms are relatively successful at diminishing the effect of the attacks. We note that, as expected, the performance of Defense-GAN-Rec and that of Defense-GAN-Orig are very close. In addition, they both perform consistently well across different classifier and substitute model combinations. MagNet also performs in a consistent manner, but achieves lower accuracy than Defense-GAN. Two adversarial training defenses are presented: the first one obtains the adversarial examples assuming the same attack = 0.3, and the second assumes a different = 0.15. With incorrect knowledge of, the performance of adversarial training generally decreases. In addition, the classification performance of this defense method has very large variance across the different architectures. It is worth noting that adversarial training defense is only fit against FGSM attacks, because the adversarially augmented data, even with a different, is generated using the same method as the black-box attack (FGSM). In contrast, Defense-GAN and MagNet are general defense mechanisms which do not assume a specific attack model. The performances of defenses on the F-MNIST dataset, shown in TAB1, are noticeably lower than on MNIST. This is due to the large = 0.3 in the FGSM attack. Please see Appendix D for qualitative examples showing that = 0.3 represents very high noise, which makes F-MNIST images difficult to classify, even by a human. In addition, the Defense-GAN parameters used in this experiment were kept the same for both Tables, in order to study the effect of dataset complexity, and can be further optimized as investigated in the next section. FIG1 shows the effect of varying the number of GD iterations L as well as the random restarts R used to compute the GAN reconstructions of input images. 
Across different L and R values, Defense-GAN-Rec and Defense-GAN-Orig have comparable performance. Increasing L has the expected effect of improving performance when no attack is present. Interestingly, with an FGSM attack, the classification performance decreases after a certain L value. With too many GD iterations on the mean squared error (MSE) ||G(z) − (x + δ)|| 2 2, some of the adversarial noise components are retained. In the right Figure, the effect of varying R is shown to be extremely pronounced. This is due to the non-convex nature of the MSE, and increasing R enables us to sample different local minima. We now investigate the effect of changing the attack in Table 3. As expected, with higher, the FGSM attack is more successful, especially on the F-MNIST dataset where the noise norm seems to have a more pronounced effect with nearly 37% drop in performance between = 0.1 and 0.3. Figure 7 in Appendix D shows adversarial samples as well as their reconstructions with Defense-GAN at different values of. We can see that for large, the class is difficult to discern, even for the human eye. Even though it seems that increasing is a desirable strategy for the attacker, this increases the likelihood that the adversarial noise is discernible and therefore the attack is detected. It is trivial for the attacker to provide adversarial images at very high, and a good measure of an attack's strength is its ability to affect performance at low. In fact, in the next section, we discuss how Defense-GAN can be used to not only diminish the effect of attacks, but to also detect them. We intuitively expect that clean, unperturbed images will lie closer to the range of the Defense-GAN generator G than adversarial examples. This is due to the fact that G was trained to produce images which resemble the legitimate data. In light of this observation, we propose to use the MSE of an image with it is reconstruction from as a "metric" to decide whether or not the image was Table 3: Classification accuracy of Model F using Defense-GAN (L = 400, R = 10), under FGSM black-box attacks for various noise norms and substitute Model E. adversarially manipulated. In order words, for a given threshold θ > 0, the hypothesis test is: We compute the reconstruction MSEs for every image from the test dataset, and its adversarially manipulated version using FGSM. We show the Receiver Operating Characteristic (ROC) curves as well as the Area Under the Curve (AUC) metric for different Defense-GAN parameters and values in FIG2. The show that this attack detection strategy is effective especially when the number of GD iterations L and random restarts R are large. From the left and middle Figures, we can conclude that the number of random restarts plays a very important role in the detection false positive and true positive rates as was discussed in Section 4.1.1. Furthermore, when is very small, it becomes difficult to detect attacks at low false positive rates. We now present on white-box attacks using three different strategies: FGSM, RAND+FGSM, and CW. We perform the CW attack for 100 iterations of projected GD, with learning rate 10.0, and use c = 100 in equation FORMULA3. Table 4 shows the classification performance of different classifier models across different attack and defense strategies. We note that Defense-GAN significantly outperforms the two other baseline defenses. We even give the adversarial attacker access to the random initializations of z. 
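Returning to the attack-detection strategy described above, whose threshold test was lost in extraction: the decision rule compares the reconstruction MSE to a threshold, and the ROC and AUC figures can be computed with standard tooling. A small sketch, where reconstruct, images, and labels are placeholders:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def reconstruction_mse(x, x_rec):
    return float(np.mean((np.asarray(x) - np.asarray(x_rec)) ** 2))

def looks_adversarial(x, x_rec, threshold):
    """Flag the input as adversarial when its Defense-GAN reconstruction error exceeds the threshold."""
    return reconstruction_mse(x, x_rec) > threshold

# ROC / AUC over a mix of clean (label 0) and attacked (label 1) inputs:
# scores = [reconstruction_mse(x, reconstruct(x)) for x in images]
# fpr, tpr, thresholds = roc_curve(labels, scores)
# print("AUC:", auc(fpr, tpr))
```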
However, we noticed that the performance does not change much when the attacker does not know the initialization. Adversarial training was done using FGSM to generate the adversarial samples. It is interesting to mention that when CW attack is used, adversarial training performs extremely poorly. As previously discussed, adversarial training does not generalize well against different attack methods. Due to the loop of L steps of GD, Defense-GAN is resilient to GD-based white-box attacks, since the attacker needs to "un-roll" the GD loop and propagate the gradient of the loss all the way across L steps. In fact, from Table 4 L = 25, the performance of the same network drops to 0.947 (more than 5% drop). This shows that using a larger L significantly increases the robustness of Defense-GAN against GD-based whitebox attacks. This comes at the expense of increased inference time complexity. We present a more detailed discussion about the difficulty of GD-based white-box attacks in Appendix B and time complexity in Appendix G. Additional white-box experimental on higher-dimensional images are reported in Appendix F. Table 4: Classification accuracies of different classifier models using various defense strategies on the MNIST (top) and F-MNIST (bottom) datasets, under FGSM, RAND+FGSM, and CW white-box attacks. Defense-GAN has L = 200 and R = 10. Classifier In this paper, we proposed Defense-GAN, a novel defense strategy utilizing GANs to enhance the robustness of classification models against black-box and white-box adversarial attacks. Our method does not assume a particular attack model and was shown to be effective against most commonly considered attack strategies. We empirically show that Defense-GAN consistently provides adequate defense on two benchmark computer vision datasets, whereas other methods had many shortcomings on at least one type of attack. It is worth mentioning that, although Defense-GAN was shown to be a feasible defense mechanism against adversarial attacks, one might come across practical difficulties while implementing and deploying this method. The success of Defense-GAN relies on the expressiveness and generative power of the GAN. However, training GANs is still a challenging task and an active area of research, and if the GAN is not properly trained and tuned, the performance of Defense-GAN will suffer on both original and adversarial examples. Moreover, the choice of hyper-parameters L and R is also critical to the effectiveness of the defense and it may be challenging to tune them without knowledge of the attack. A OPTIMALITY OF p g = p DATA FOR WGANS Sketch of proof of Lemma 1: The WGAN min-max loss is given by: DISPLAYFORM0 DISPLAYFORM1 For a fixed G, the optimal discriminator D which maximizes V W (D, G) is such that: DISPLAYFORM2 Plugging D * G back into, we get: DISPLAYFORM3 }. Clearly, to minimize, we need to set p data (x) = p g (x) for x ∈ X. Then, since both pdfs should integrate to 1, DISPLAYFORM4 However, this is a contradiction since p g (x) < p data (x) for x ∈ X c, unless µ(X c) = 0 where µ is the Lebesgue measure. This concludes the proof. In order to perform a GD-based white-box attack on models using Defense-GAN, an attacker needs to compute the gradient of the output of the classifier with respect to the input. From Figure 1, the generator and the classifier can be seen as one, combined, feedforward network, through which it is easy to propagate gradients. 
The difficulty lies in the orange box of the GD optimization detailed in FIG0.For the sake of simplicity, let's assume that R = 1. Define L(x, z) = ||G(z) − x|| 2 2. Then z * = z L, which is computed recursively as follows: DISPLAYFORM0 and so on. Therefore, computing the gradient of z * with respect to x involves a large number (L) of recursive chain rules and high-dimensional Jacobian tensors. This computation gets increasingly prohibitive for large L. We describe the neural network architectures used throughout the paper. The detail of models A through F used for classifier and substitute networks can be found in TAB3. In TAB4, the GAN architectures are described, and in TAB5, the encoder architecture for the MagNet baseline is given. In what follows:• Conv(m, k × k, s) refers to a convolutional layer with m feature maps, filter size k × k, and stride s• ConvT(m, k × k) refers to the transpose (gradient) of Conv (sometimes referred to as "deconvolution") with m feature maps, filter size k × k, and stride s• FC(m) refers to a fully-connected layer with m outputs• Dropout(p) refers to a dropout layer with probability p• ReLU refers to the Rectified Linear Unit activation• LeakyReLU(α) is the leaky version of the Rectified Linear Unit with parameter α Generator Discriminator DISPLAYFORM0 We report on white-box attacks on the CelebFaces Attributes dataset (CelebA) BID12 in TAB0. The CelebA dataset is a large-scale face dataset consisting of more than 200, 000 face images, split into training, validation, and testing sets. The RGB images were center-cropped and resized to 64 × 64. We performed the task of gender classification on this dataset. The GAN architecture is the same as that in TAB4, except for an additional ConvT(128, 5 × 5, 1) layer in the generator network. G TIME COMPLEXITYThe computational complexity of reconstructing an image using Defense-GAN is on the order of the number of GD iterations performed to estimate z *, multiplied by the time to compute gradients. The number of random restarts R has less effect on the running time, since random restarts are independent and can run in parallel if enough resources are available. TAB0 shows the average running time, in seconds, to find the reconstructions of MNIST and F-MNIST images on one NVIDIA GeForce GTX TITAN X GPU. For most applications, these running times are not prohibitive. We can see a tradeoff between running time and defense robustness as well as accuracy. | Defense-GAN uses a Generative Adversarial Network to defend against white-box and black-box attacks in classification models. | 1,231 | scitldr |
We study the problem of learning similarity functions over very large corpora using neural network embedding models. These models are typically trained using SGD with random sampling of unobserved pairs, with a sample size that grows quadratically with the corpus size, making it expensive to scale. We propose new efficient methods to train these models without having to sample unobserved pairs. Inspired by matrix factorization, our approach relies on adding a global quadratic penalty and expressing this term as the inner-product of two generalized Gramians. We show that the gradient of this term can be efficiently computed by maintaining estimates of the Gramians, and develop variance reduction schemes to improve the quality of the estimates. We conduct large-scale experiments that show a significant improvement both in training time and generalization performance compared to sampling methods. We consider the problem of learning a similarity function h: X × Y → R, that maps each pair of items, represented by their feature vectors (x, y) ∈ X × Y, to a real number h(x, y), representing their similarity. We will refer to x and y as the left and right feature vectors, respectively. Many problems can be cast in this form: In natural language processing, x represents a context (e.g. a bag of words), y represents a candidate word, and the target similarity measures the likelihood to observe y in context x BID14 BID16 BID13. In recommender systems, x represents a user query, y represents a candidate item, and the target similarity is a measure of relevance of item y to query x, e.g. a movie rating BID0, or the likelihood to watch a given movie BID12 ). Other applications include image similarity, where x and y are pixel-representations of images BID5 BID6 ), and network embedding models BID10 ), where x and y are nodes in a graph and the similarity is whether an edge connects them. A popular approach to learning similarity functions is to train an embedding representation of each item, such that items with high similarity are mapped to vectors that are close in the embedding space. A common property of such problems is that only a small subset of all possible pairs X × Y is present in the training set, and those examples typically have high similarity. Training exclusively on observed examples has been demonstrated to yield poor generalization performance. Intuitively, when trained only on observed pairs, the model places the embedding of a given item close to similar items, but does not learn to place it far from dissimilar ones . Taking into account unobserved pairs is known to improve the embedding quality in many applications, including recommendation BID12 BID1 and word analogy tasks . This is often achieved by adding a low-similarity prior on all pairs, which acts as a repulsive force between all embeddings. But because it involves a number of terms quadratic in the corpus size, this term is computationally intractable (except in the linear case), and it is typically optimized using sampling: for each observed pair in the training set, a set of random unobserved pairs is sampled and used to compute an estimate of the repulsive term. But as the corpus size increases, the quality of the estimates deteriorates unless the sample size is increased, which limits scalability. In this paper, we address this issue by developing new methods to efficiently estimate the repulsive term, without sampling unobserved pairs. 
Our approach is inspired by matrix factorization models, which correspond to the special case of linear embedding functions. They are typically trained using alternating least squares BID12, or coordinate descent methods BID2, which circumvent the computational burden of the repulsive term by writing it as a matrix-inner-product of two Gramians, and computing the left Gramian before optimizing over the right embeddings, and viceversa. Unfortunately, in non-linear embedding models, each update of the model parameters induces a simulateneous change in all embeddings, making it impractical to recompute the Gramians at each iteration. As a , the Gramian formulation has been largely ignored in the non-linear setting, where models are instead trained using stochastic gradient methods with sampling of unobserved pairs, see BID7. were, to our knowledge, the first to attempt leveraging the Gramian formulation in the non-linear case. They consider a model where only one of the embedding functions is non-linear, and show that the gradient can be computed efficiently in that case. Their is remarkable in that it allows exact gradient computation, but this unfortunately does not generalize to the case where both embedding functions are non-linear. Contributions We propose new methods that leverage the Gramian formulation in the non-linear case, and that, unlike previous approaches, are efficient even when both left and right embeddings are non-linear. Our methods operate by maintaining stochastic estimates of the Gram matrices, and using different variance reduction schemes to improve the quality of the estimates. We perform several experiments that show these methods scale far better than traditional sampling approaches on very large corpora. We start by reviewing preliminaries in Section 2, then derive the Gramian-based methods and analyze them in Section 3. We conduct large-scale experiments on the Wikipedia dataset in Section 4, and provide additional experiments in the appendix. All the proofs are deferred to Appendix A. We consider models that consist of two embedding functions u: R d ×X → R k and v: R d ×Y → R k, which map a parameter vector 1 θ ∈ R d and feature vectors x, y to embeddings u(θ, x), v(θ, y) ∈ R k. The output of the model is the dot product 2 of the embeddings h θ (x, y) = u(θ, x), v(θ, y), where ·, · denotes the usual inner-product on R k. Low-rank matrix factorization is a special case, in which the left and right embedding functions are linear in x and y. Figure 1 illustrates a non-linear model, in which each embedding function is given by a feed-forward neural network. 3 We denote the training set by T = {(x i, y i, s i) ∈ X × Y × R} i∈{1,...,n}, where x i, y i are the feature vectors and s i is the target similarity for example i. To make notation more compact, we will use u i (θ), v i (θ) as a shorthand for u(θ, x i), v(θ, y i), respectively. As discussed in the introduction, we also assume that we are given a low-similarity prior p ij ∈ R for all pairs (i, j) ∈ {1, . . ., n} 2. Given a differentiable scalar loss function: R × R → R, the objective function is given by DISPLAYFORM0 where the first term measures the loss on observed data, the second term penalizes deviations from the prior, and λ is a positive hyper-parameter that trades-off the two terms. 
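To make the model class concrete, here is a minimal sketch of a two-tower dot-product model of the kind shown in Figure 1. The layer sizes and the use of PyTorch are illustrative assumptions, not the architecture used in the experiments.

```python
import torch
import torch.nn as nn

class TwoTowerModel(nn.Module):
    """Similarity h(x, y) = <u(x), v(y)> with two feed-forward embedding towers."""
    def __init__(self, x_dim, y_dim, hidden=256, k=64):
        super().__init__()
        self.left = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, k))
        self.right = nn.Sequential(nn.Linear(y_dim, hidden), nn.ReLU(), nn.Linear(hidden, k))

    def forward(self, x, y):
        u, v = self.left(x), self.right(y)
        return (u * v).sum(dim=-1), u, v   # similarity score plus both embeddings

# Observed-pair term of the objective (the quadratic penalty is handled separately):
# s_hat, u, v = model(x_batch, y_batch)
# loss_obs = ((s_hat - s_batch) ** 2).mean()
```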
To simplify the discussion, we will assume a uniform zero prior p ij as in BID12, the general case is treated in Appendix B.To optimize this objective, existing methods rely on sampling to approximate the second term, and are usually referred to as negative sampling or candidate sampling, see BID7 BID1 for a survey. Due to the double sum in, the quality of the sampling estimates degrades as the corpus size increases, which can significantly increase training times. This can be alleviated by increasing the sample size, but does not scale to very large corpora. DISPLAYFORM1 Figure 1: A dot-product embedding model for a similarity function on X × Y. A different approach to solving, widely popular in matrix factorization, is to rewrite the double sum as the inner product of two Gram matrices. Let us denote by U θ ∈ R n×k the matrix of all left embeddings such that u i (θ) is the i-th row of U θ, and similarly for V θ ∈ R n×k. Then denoting the matrix inner-product by A, B = i,j A ij B ij, we can rewrite the double sum in as: DISPLAYFORM0 Now, using the adjoint property of the inner product, we have DISPLAYFORM1 and if we denote by u ⊗ u the outer product of a vector u by itself, and define the Gram matrices DISPLAYFORM2 the penalty term becomes DISPLAYFORM3 The Gramians are k × k PSD matrices, where k, the dimension of the embedding space, is much smaller than n -typically k is smaller than 1000, while n can be arbitrarily large. Thus, the Gramian formulation has a much lower computational complexity than the double sum formulation, and this reformulation is at the core of alternating least squares and coordinate descent methods BID12 BID2, which operate by computing the exact Gramian for one side, and solving for the embeddings on the other. However, these methods do not apply in the non-linear setting due to the implicit dependence on θ, as a change in the model parameters simultaneously changes all embeddings on both sides, making it intractable to recompute the Gramians at each iteration, so the Gramian formulation has not been used when training non-linear models. In the next section, we show that it can in fact be leveraged in the non-linear case. In order to leverage the Gramian formulation in the non-linear case, we start by rewriting the objective function in terms of the Gramians defined in. Let DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2. Intuitively, for each example i, −∇f i (θ) pulls the embeddings u i and v i close to each other (assuming a high similarity s i), while −∇g i (θ) creates a repulsive force between u i and all embeddings {v j} j∈{1,...,n}, and between v i and all {u j} j∈{1,...,n}, see Appendix D for further discussion, and illustration of the effect of this term. While the Gramians are expensive to recompute at each iteration, we can maintain PSD estimateŝ DISPLAYFORM3. Then the gradient of g(θ) can be approximated by the gradient (w.r.t. θ) of DISPLAYFORM4 as stated in the following proposition. DISPLAYFORM5 is an unbiased estimate of ∇g(θ).In a mini-batch setting, one can further averageĝ i over a batch of examples i ∈ B (which we do in our experiments), but we will omit batches to keep the notation concise. Next, we propose several methods for computing Gramian estimatesĜ u,Ĝ v, and discuss their tradeoffs. Since each Gramian can be written as a sum of rank-one terms, e.g. DISPLAYFORM6, a simple unbiased estimate can be obtained by sampling one term (or a batch) from this sum. 
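The key identity of Section 2.2, that the double sum over all pairs equals an inner product of two k x k Gram matrices, is easy to verify numerically. A small self-contained check, using a squared penalty and the zero prior assumed above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 8
U = rng.normal(size=(n, k))   # rows are left embeddings u_i
V = rng.normal(size=(n, k))   # rows are right embeddings v_j

# Naive double sum: (1/n^2) * sum_{i,j} <u_i, v_j>^2            -- O(n^2 k) work
double_sum = ((U @ V.T) ** 2).sum() / n**2

# Gramian form: <G_u, G_v>_F with G_u = (1/n) U^T U, G_v = (1/n) V^T V   -- O(n k^2) work
G_u = U.T @ U / n
G_v = V.T @ V / n
gramian_form = float(np.sum(G_u * G_v))

print(np.allclose(double_sum, gramian_form))   # True
```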
We improve on this by using different variance reduction methods, which we discuss in the next two sections. Our first method is inspired by the stochastic average gradient (SAG) method (; BID9), which reduces the variance of the gradient estimates by maintaining a cache of individual gradients, and estimating the full gradient using this cache. Since each Gramian is a sum of outer-products (see equation FORMULA4), we can apply the same technique to estimate Gramians. For all i ∈ {1, . . ., n}, letû i,v i be a cache of the left and right embeddings respectively. We will denote by a superscript (t) the value of a variable at iteration t. LetŜ DISPLAYFORM0 i, which corresponds to the Gramian computed with the current caches. At each iteration t, an example i is drawn uniformly at random and the estimate of the Gramian is given bŷ DISPLAYFORM1 and similarly forĜ DISPLAYFORM2 v. This is summarized in Algorithm 1, where the model parameters are updated using SGD (line 10), but this update can be replaced with any first-order method. Here β can take one of the following values: β = 1 n, following SAG , or β = 1, following SAGA BID9. The choice of β comes with trade-offs that we briefly discuss below. We will denote the cone of positive semi-definite k × k matrices by S DISPLAYFORM3 While taking β = 1 gives an unbiased estimate, note that it does not guarantee that the estimates remain in S k +. In practice, this can cause numerical issues, but can be avoided by projectingĜ u,Ĝ v on S k +, using their eigenvalue decompositions. The per-iteration cost of maintaining the Gramian estimates is O(k) to update the caches, O(k 2) to update the estimatesŜ u,Ŝ v,Ĝ u,Ĝ v, and O(k 3) for projecting on S k +. Given the small size of the embedding dimension k, O(k 3) remains tractable. The memory cost is O(nk), since each embedding needs to be cached (plus a negligible O(k 2) for storing the Gramian estimates). This makes SAGram much less expensive than applying the original SAG(A) methods, which require maintaining caches of the gradients, this would incur a O(nd) memory cost, where d is the number of parameters of the model, and can be orders of magnitude larger than the embedding dimension k. However, O(nk) can still be prohibitively expensive when n is very large. In the next section, we propose a different method which does not incur this additional memory cost. DISPLAYFORM4 Update Gramian estimates (i ∼ Uniform(n)) 8: DISPLAYFORM5 Update model parameters then update caches (i ∼ Uniform(n)) 10: DISPLAYFORM6 To derive the second method, we reformulate problem as a two-player game. The first player optimizes over the parameters of the model θ, the second player optimizes over the Gramian estimateŝ G u,Ĝ v ∈ S k +, and they seek to minimize the respective losses DISPLAYFORM0 whereĝ i is defined in FORMULA10, and · F denotes the Frobenius norm. To justify this reformulation, we can characterize its first-order stationary points, as follows. DISPLAYFORM1 + is a first-order stationary point for if and only if θ is a first-order stationary point for problem FORMULA0 DISPLAYFORM2 Several stochastic first-order dynamics can be applied to problem, and Algorithm 2 gives a simple instance where each player implements SGD with a constant learning rate. In this case, the updates of the Gramian estimates (line 7) have a particularly simple form, since ∇Ĝ DISPLAYFORM3 and similarly forĜ v. 
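The listing of Algorithm 1 above is garbled by the extraction; the following is a rough sketch of the SAGram Gramian estimator it describes (a cached embedding per item plus a running sum), shown for the left Gramian only. The exact update ordering and the handling of beta are partly inferred, so treat this as illustrative rather than a faithful transcription.

```python
import numpy as np

class SAGramEstimator:
    """Keeps a cached embedding per item and S = (1/n) * sum_i cache_i cache_i^T."""
    def __init__(self, n, k, beta=1.0):
        self.cache = np.zeros((n, k))
        self.S = np.zeros((k, k))
        self.n, self.beta = n, beta          # beta = 1 (SAGA-like) or 1/n (SAG-like)

    def update(self, i, u_i):
        """Return a Gramian estimate after observing a fresh embedding u_i for item i."""
        old = np.outer(self.cache[i], self.cache[i])
        new = np.outer(u_i, u_i)
        G_hat = self.S + self.beta * (new - old)
        self.S += (new - old) / self.n       # keep S consistent with the caches
        self.cache[i] = u_i
        if self.beta == 1.0:                  # beta = 1 can leave the PSD cone; project back
            w, Q = np.linalg.eigh(G_hat)
            G_hat = (Q * np.maximum(w, 0.0)) @ Q.T
        return G_hat
```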
One advantage of this form is that each update performs a convex combination between the current estimate and a rank-1 PSD matrix, thus guaranteeing that the estimates remain in S k +, without the need to project. The per-iteration cost of updating the estimates is O(k 2), and the memory cost is O(k 2) for storing the Gramians, which are both negligible. DISPLAYFORM4 Update model parameters (i ∼ Uniform) 8: DISPLAYFORM5 The update can also be interpreted as computing an online estimate of the Gramian by averaging rank-1 terms with decaying weights, thus we call the method Stochastic Online Gramian. Indeed, we have by induction on t,Ĝ DISPLAYFORM6 Intuitively, averaging reduces the variance of the estimator but introduces a bias, and the choice of the hyper-parameter α trades-off bias and variance. The next proposition quantifies this tradeoff under mild assumptions. DISPLAYFORM7 The first assumption simply bounds the variance of single-point estimates, while the second bounds the distance between two consecutive Gramians, a reasonable assumption, since in practice the changes in Gramians vanish as the trajectory θ (τ) converges. In the limiting case α = 1,Ĝu reduces to a single-point estimate, in which case the bias vanishes and the variance is maximal, while smaller values of α decrease variance and increase bias. This is confirmed in our experiments, as discussed in Section 4.2. We conclude this section by showing that candidate sampling methods (see BID7 BID1 for recent surveys) can be reinterpreted in terms of the Gramian formulation. These methods work by approximating the double-sum in using a random sample of pairs. Suppose a batch of pairs (i, j) ∈ B × B is sampled 4, and the double sum is approximated bỹ DISPLAYFORM0 where µ i, ν j are the inverse probabilities of sampling i, j respectively (to guarantee that the estimate is unbiased). Then applying a similar transformation to Section 2.2, one can show that DISPLAYFORM1 which is equivalent to computing two batch-estimates of the Gramians. Implementing existing methods using rather than can decrease their computional complexity in the large batch regime, for the following reason: the double-sum formulation involves a sum of |B||B | dot products of vectors in R k, thus computing its gradient costs O(k|B||B |). On the other hand, the Gramian formulation FORMULA0 is the inner product of two k × k matrices, each involving a sum over the batch, thus computing its gradient costs O(k 2 max(|B|, |B |)), which is cheaper when the batch size is larger than the embedding dimension k, a common situation in practice. With this formulation, the advantage of SOGram and SAGram becomes clear, as they use more embeddings to estimate Gramians (by caching or online averaging) than would be possible using candidate sampling. In this section, we conduct large-scale experiments on the Wikipedia dataset (Wikimedia Foundation). Additional experiments on MovieLens BID11 are given in Appendix F. Datasets We consider the problem of learning the intra-site links between Wikipedia pages. Given a pair of pages (x, y) ∈ X × X, the target similarity is 1 if there is a link from x to y, and 0 otherwise. Here a page is represented by a feature vector x = (x id, x ngrams, x cats), where x id is (a one-hot encoding of) the page URL, x ngrams is a bag-of-words representation of the set of n-grams of the page's title, and x cats is a bag-of-words representation of the categories the page belongs to. 
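Before turning to the experimental setup, here is a compact sketch of the SOGram estimator and the surrogate penalty described earlier in this section. The exact expression for the surrogate was lost in extraction; the version below, a quadratic form in the fresh embeddings with the Gramian estimates held fixed, is one consistent reading, and PyTorch plus the batch-averaged update are illustrative choices.

```python
import torch

class SOGram:
    """Online Gramian estimates: G <- (1 - alpha) * G + alpha * (1/|B|) * sum_b u_b u_b^T."""
    def __init__(self, k, alpha=0.01):
        self.G_u = torch.zeros(k, k)
        self.G_v = torch.zeros(k, k)
        self.alpha = alpha

    def update(self, u, v):                  # u, v: [batch, k] detached embeddings
        self.G_u = (1 - self.alpha) * self.G_u + self.alpha * (u.T @ u) / u.shape[0]
        self.G_v = (1 - self.alpha) * self.G_v + self.alpha * (v.T @ v) / v.shape[0]

    def penalty(self, u, v):
        """Surrogate whose gradient tracks that of <G_u(theta), G_v(theta)>; estimates act as constants."""
        left = torch.einsum('bi,ij,bj->b', u, self.G_v, u).mean()
        right = torch.einsum('bi,ij,bj->b', v, self.G_u, v).mean()
        return left + right

# One training step (sketch):
# total = loss_obs + lam * sogram.penalty(u, v)
# total.backward(); optimizer.step(); sogram.update(u.detach(), v.detach())
```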
Note that the left and right feature spaces coincide in this case, but the target similarity is not necessarily symmetric (the links are directed edges). We carry out experiments on subsets of the Wikipedia graph corresponding to three languages: Simple English, French, and English, denoted respectively by simple, fr, and en. These subgraphs vary in size, and Table 1 shows some basic statistics for each set. Each set is partitioned into training and validation using a (90%, 10%) split. Table 1: Corpus sizes for each training set. Models We train non-linear embedding models consisting of a two-tower neural network as in Figure 1, where the left and right embedding functions map, respectively, the source and destination page features. The two embedding networks have the same structure: the input feature embeddings are concatenated then mapped through two hidden layers with ReLU activations. The input embeddings are shared between the two networks, and their dimensions are 50 for simple, 100 for fr, and 120 for en. Training methods The model is trained using a squared error loss, (s, s) = 1 2 (s − s) 2, optimized using SAGram, SOGram, and as baseline, SGD with candidate sampling, using different sampling strategies. The experiments reported in this section use a learning rate η = 0.01, a penalty coefficient λ = 10, and batch size 1024. These parameters correspond to the best performance of the baseline methods; we report additional with different hyper-parameter settings in Appendix E. For SAGram and SOGram, a batch B is used in the Gramian updates (line 8 in Algorithm 1 and line 6 in Algorithm 2, where we use a sum of rank-1 terms over the batch), and another batch B is used in the model parameter update. For the sampling baselines, the double sum is approximated by all pairs in the cross product (i, j) ∈ B × B, and for efficiency, we implement them using the Gramian formulation as discussed in Section 3.3, since we operate in a regime where the batch size is an order of magnitude larger than the embedding dimension k. In the first baseline method, uniform, items are sampled uniformly from the vocabulary (all pages are sampled with the same probability). The other baseline methods implement importance sampling similarly to BID3; BID14: in linear, the probability is proportional to the number of occurrences of the page in the training set, and in sqrt, the probability is proportional to the square root of the number of occurrences. DISPLAYFORM0 DISPLAYFORM1 In the first set of experiments, we evaluate the quality of the Gramian estimates using each method. In order to have a meaningful comparison, we fix a trajectory of model parameters (θ (t) ) t∈{1,...,T}, and evaluate how well each method tracks the true Gramians G u (θ (t) ), G v (θ (t) ) on that common trajectory. This experiment is done on Wikipedia simple (the smallest of the datasets) so that we can compute the exact Gramians by periodically computing the embeddings u i (θ (t) ), v i (θ (t) ) on the full training set at a given time t. We report in FIG3 the estimation error for each method, measured by the normalized Frobenius distance DISPLAYFORM0. In FIG3, we can observe that both variants of SAGram yield the best estimates, and that SOGram yields better estimates than the baselines. Among the baseline methods, importance sampling (both linear and sqrt) perform better than uniform. 
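The three baseline sampling strategies named above (uniform, linear, sqrt) differ only in how item frequencies are turned into sampling probabilities; a trivial sketch, with item_counts assumed to be per-item occurrence counts in the training set:

```python
import numpy as np

def sampling_distribution(item_counts, strategy="linear"):
    """Candidate-sampling probabilities derived from item occurrence counts."""
    counts = np.asarray(item_counts, dtype=np.float64)
    if strategy == "uniform":
        weights = np.ones_like(counts)
    elif strategy == "sqrt":
        weights = np.sqrt(counts)
    else:                                   # "linear": proportional to occurrence count
        weights = counts
    return weights / weights.sum()

# sampled = rng.choice(num_items, size=batch_size, p=sampling_distribution(counts, "sqrt"))
```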
We also vary the batch size to evaluate its impact: increasing the batch size from 128 to 1024 improves the quality of all estimates, as expected, but it is worth noting that the estimates of SOGram with |B| = 128 have comparable quality to baseline estimates with |B| = 1024.In Appendix E, we show that a similar effect can be observed for gradient estimates, and we make a formal connection between Gramian and gradient estimation errors. In FIG3, we evaluate the bias-variance tradeoff discussed in Section 3.2, by comparing the estimates of SOGram with different learning rates α. We observe that higher values of α suffer from higher variance which persists throughout the trajectory. A lower α reduces the variance but introduces a bias, which is mostly visible during the early iterations. In order to evaluate the impact of the Gramian estimation quality on training speed and generalization, we compare the validation performance of SOGram to the sampling baselines, on each dataset (we do not use SAGram due to its prohibitive memory cost for corpus sizes of 1M or more). The models are trained with a fixed time budget of 20 hours for simple, 30 hours for fr and 50 hours for en. We estimate the mean average precision (MAP) at 10, by scoring, every 5 minutes, left items in the validation set against 50K random candidates (exhaustively scoring all candidates is prohibitively expensive, but this gives a reasonable approximation). The are reported in FIG4. Compared to the sampling baselines, SOGram exhibits faster training and better validation performance across all sampling strategies. TAB3 The improvement on simple is modest (between 4% and 10%), which can be explained by the relatively small corpus size (85K unique pages), in which case candidate sampling with a large batch size already yields decent estimates. On the larger corpora, we obtain more significant improvements: between 9% and 15% on fr and between 9% and 19% on en. It's interesting to observe that the best performance is consistently achieved by SOGram with linear importance sampling, even though linear performs slightly worse than other strategies in the baseline. SOGram also has a significant impact on training speed: if we measure the time it takes for SOGram to exceed the final validation performance of each baseline method, this time is a small fraction of the total budget. In our experiments, this fraction is between 10% and 17% for simple, between 23% and 30% for fr, and between 16% and 24% for en. Additional numerical are provided in Appendix E, where we evaluate the impact of other parameters, such as the effect of the batch size |B|, the learning rate η, and the Gramian learning rate α. For example, we show that the relative improvement of SOGram compared to the baselines is even larger when using smaller batches, and its generalization performance is more robust to the choice of batch size and learning rate. We showed that the Gramian formulation commonly used in low-rank matrix factorization can be leveraged for training non-linear embedding models, by maintaining estimates of the Gram matrices and using them to estimate the gradient. By applying variance reduction techniques to the Gramians, one can improve the quality of the gradient estimates, without relying on large sample size as is done in traditional sampling methods. This leads to a significant impact on training time and generalization quality, as indicated by our experiments. 
While we focused on problems with very large vocabulary size, where traditional approaches are inefficient, it will be interesting to evaluate our methods on other applications such as word-analogy tasks BID14. Another direction of future work is to extend this formulation to a larger family of penalty functions, such as the spherical loss family studied in (; BID8 Proof of Proposition 1. Starting from the expression of g(θ) = G u (θ), G v (θ), and applying the chain rule, we have DISPLAYFORM0 where J u (θ) denotes the Jacobian of G u (θ), an order-three tensor given by DISPLAYFORM1 and DISPLAYFORM2 Observing DISPLAYFORM3, and applying the chain rule, we have DISPLAYFORM4 where J u,i (θ) is the Jacobian of u i (θ) ⊗ u i (θ), and DISPLAYFORM5 an similarly for J v,i. We conclude by taking expectations in and using assumption thatĜ u,Ĝ v are independent of i. u, we have, DISPLAYFORM0 which is a sum of matrices in the PSD cone S k +.Proof of Proposition 3. Denoting by (F t) t≥0 the filtration generated by the sequence (θ (t) ) t≥0, and taking conditional expectations in, we have DISPLAYFORM1 + is a first-order stationary point of the game if and only if DISPLAYFORM2 The second and third conditions simply states that ∇Ĝ FORMULA0 is equivalent toĜ u = G u (θ) (and similarly, is equivalent toĜ v = G v (θ)). Using the expression of ∇g, we get that is equivalent to ∇f (θ) + λ∇g(θ) = 0. DISPLAYFORM3 Proof of Proposition 5. We start by proving the first bound. As stated in Section 3.2, we have, by induction on t,Ĝ DISPLAYFORM4 And by definition ofḠ (t), we haveḠ DISPLAYFORM5 ) are zero-mean random variables. Thus, taking the second moment, and using the first assumption (which simply states that the variance of ∆ (τ) u is bounded by σ 2 ), we have DISPLAYFORM6 which proves the first inequality.To prove the second inequality, we start from the definition ofḠ DISPLAYFORM7 u: DISPLAYFORM8 where the first equality uses that fact that DISPLAYFORM9 Focusing on the first term, and bounding G DISPLAYFORM10 u F ≤ (t − τ)δ by the triangle inequality, we get DISPLAYFORM11 Combining FORMULA2 and FORMULA0, we get the desired inequality. So far, we have assumed a uniform zero prior to simplify the notation. In this section, we relax this assumption. Suppose that the prior is given by a low-rank matrix P = QR, where Q, R ∈ R n×k P. In other words, the prior for a given pair (i, j) is given by the dot product of two vectors p ij = q i, r j. In practice, such a low-rank prior can be obtained, for example, by first training a simple low-rank matrix approximation of the target similarity matrix. Given this low-rank prior, the penalty term becomes DISPLAYFORM0 where c = Q Q, R R is a constant that does not depend on θ. Here, we used a superscript P in g P to disambiguate the zero-prior case. Now, if we define weighted embedding matrices DISPLAYFORM1 Finally, if we maintain estimatesĤ u,Ĥ v of H u (θ), H v (θ), respectively (using the methods proposed in Section 3), we can approximate ∇g P (θ) by the gradient of DISPLAYFORM2 Proposition 1 and Algorithms 1 and 2 can be generalized to the low-rank prior case by adding updates forĤ u,Ĥ v, and by using expression ofĝ P i when computing the gradient estimate. DISPLAYFORM3 Proof. Similar to the proof of Proposition 1.The generalized versions of SAGram and SOGram are stated below, where the differences compared to the zero prior case are highlighted. 
Note that, unlike the Gramian matrices, the weighted embedding matrices H u, H v are not symmetric, thus we do not project their estimates. Algorithm 3 SAGram (Stochastic Average Gramian) with low-rank prior 1: Input: Training data {(x i, y i, s i)} i∈{1,...,n}, low-rank priors {q i, r i} i∈{1,...,n} 2: Initialization phase DISPLAYFORM4 Update weighted embedding estimates 11: DISPLAYFORM5 Update model parameters then update caches (i ∼ Uniform(n)) 14: DISPLAYFORM6 Algorithm 4 SOGram (Stochastic Online Gramian) with low-rank prior 1: Input: Training data {(x i, y i, s i)} i∈{1,...,n}, low-rank priors {q i, r i} i∈{1,...,n} 2: Initialization phase DISPLAYFORM7 Update weighted embedding estimates 9: DISPLAYFORM8 Update model parameters (i ∼ Uniform(n)) 11: DISPLAYFORM9 In additional to using a non-uniform prior, it can also be desirable to use non-uniform weights in the penalty term, for example to balance the contribution of frequent and infrequent items to the penalty term. We discuss how to adapt our algorithms to the non-uniform weights case. Suppose that the penalty function is given by DISPLAYFORM10 where a i, b j are positive left and right weights, respectively. Here we used a superscript W in g W to disambiguate the uniform-weight case. Then using a similar transformation to Section 2.2, we can rewrite g W as follows: DISPLAYFORM11 i.e. g W is the inner-product of two weighted Gramians. Both SAGram and SOGram can be generalized to this case, by maintaining estimates of the weighted Gramians, one simply needs to scale the contribution of each term u i ⊗ u i by the appropriate embedding weight a i (and similarly for the right embedding).Remark Here we discussed the case of a rank-one weight matrix, i.e. when the unobserved weight matrix can be written as W = a ⊗ b for a given left and right weight vectors a, b. The weight matrix cannot be arbitrary (as specifying n 2 individual weights is prohibitively expensive in many applications such as the experiments of this paper), thus one needs a consice description of the weights matrix. One such description is the sum of a sparse and low-rank matrix, and one can generalize SAGram and SOGram to this case: the sparse part of the weight matrix can be optimized explicitly, and the low-rank part can be optimized using weighted Gramians, by generalizing the argument of the previous paragraph. In this section, we briefly discuss different interpretations of the Gramian inner-product g(θ). Starting from the expression of g(θ) and the definition of the Gram matrices, we have DISPLAYFORM0 which is a quadratic form in the left embeddings u i (and similarly for v j, by symmetry). In particular, the partial derivative of the Gramian term with respect to an embedding u i is DISPLAYFORM1. Thus the gradient of g(θ) with respect to u i is an average of scaled projections of u i on each of the right embeddings v j, and moving in the direction of the negative gradient simply moves u i away from regions of the embedding space with a high density of right embeddings. This corresponds to the intuition discussed in the introduction: the purpose of the g(θ) term is precisely to push left and right embeddings away from each other, to avoid placing embeddings of dissimilar items near each other, a phenomenon referred to as folding of the embedding space . 
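The repulsive-force reading above can be checked numerically: treating g as a function of the raw embeddings with a squared penalty and zero prior, the gradient with respect to a single left embedding is proportional to G_v u_i, i.e., a scaled projection onto the right Gramian. A small finite-difference check; the constant depends on the loss convention, so this verifies the form rather than the paper's exact expression.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, i = 200, 6, 3
U = rng.normal(size=(n, k))
V = rng.normal(size=(n, k))

def g(U, V):
    return ((U @ V.T) ** 2).sum() / n**2      # (1/n^2) * sum_{i,j} <u_i, v_j>^2

eps, grad_fd = 1e-6, np.zeros(k)
for d in range(k):                             # forward finite differences w.r.t. u_i
    Up = U.copy()
    Up[i, d] += eps
    grad_fd[d] = (g(Up, V) - g(U, V)) / eps

G_v = V.T @ V / n
print(np.allclose(grad_fd, (2.0 / n) * G_v @ U[i], atol=1e-4))   # True
```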
In order to illustrate the effect of this term on the embedding distributions, we visualize, in FIG6, the distribution of the inner product u i (θ (t) ), v j (θ (t) ), for random pairs (i, j), and for observed pairs (i = j), and how these distributions change as t increases. The plots are generated for the Wikipedia en model described in Section 4, trained with SOGram (α = 0.01), with two different values of the penalty coefficient, λ = 10 −2 and λ = 10. In both cases, the distribution for observed pairs remains concentrated around values close to 1, as one expects (recall that the target similarity is 1 for observed pairs, i.e. pairs of connected pages in the Wikipedia graph). The distributions for random pairs, however, are very different: with λ = 10, the distribution quickly concentrates around a value close to 0, while with λ = 10 −2 the distribution is more flat, and a large proportion of pairs have a high inner-product. This indicates that with a lower λ, the model is more likely to fold, i.e. place embeddings of unrelated items near each other. This is consistent with the validation MAP, reported in FIG7. With λ = 10 −2, the validation MAP increases very slowly, and remains two orders of magnitude smaller than the model trained with λ = 10. The figure also shows that when λ is too large (λ = 103), the model is over-regularized and the MAP decreases. To conclude this section, we note that our methods also apply to a related regularizer introduced in BID1, called Global Orthogonal Regularization. The authors argue that when learning feature embedding representations, spreading out the embeddings is helpful for generalization, and propose to match the second moment of each embedding distribution with that of the uniform distribution. Formally, and using our notation, they use the penalty term max(g u (θ), 1/k)+max(g v (θ), 1/k), where k is the embedding dimension, g u (θ) = 1 n 2 n i=1 n j=1 u i, u j 2, and similarly for g v. They optimize this term using candidate sampling. We can also apply the same Gramian transformation as in Section 2.2 to write g DISPLAYFORM2, and we can similarly apply SAGram and SOGram to estimate both Gramians. Formally, the difference here is that one would penalize the inner-product of each Gramian with itself, instead of the inner-product of the two. One advantage of this regularizer is that it applies to a broader class of models, as it does not require the output of the model to be the dot-product of two embedding functions. The experiments in Section 4 indicate that our methods give better estimates of the Gramians, and a natural question is how this affects gradient estimation quality. First, one can make a formal connection between the two. Since DISPLAYFORM0 the estimation error of the gradient with respect to the left embeddings u is DISPLAYFORM1 This last expression can be interpreted as a Frobenius norm of the right Gramian estimation error G v −Ĝ v, weighted by the left Gramian G u, thus the gradient error is closely related to the Gramian error. FIG8 shows the gradient estimation quality on Wikipedia simple, measured by the normalized squared norm DISPLAYFORM2. The are similar to the Gramian estimation errors reported in FIG3. Comparing the different baselines methods on simple FIG4, we observe that uniform sampling performs better than sqrt, despite having a worse Gramian estimate according to FIG3. 
One possible explanation is that the sampling distribution affects both the quality of the Gramian estimates, and the frequency at which the item embeddings are updated, which in turn affects the MAP. In particular, tail items are updated more frequently under uniform than other distributions, and this may have a positive impact on the MAP. In addition to the experiments of Section 4, we also evaluated the effect of the Gramian learning rate α on the quality of the Gramian esimates and generalization performance on Wikipedia en. FIG9 shows the validation MAP of the SOGram method for different values of α (together with the basline for reference). This reflects the bias-variance tradeoff dicussed in Proposition 5: with a lower α, progress is initially slower (due to the bias introduced in the Gramian estimates), but the final performance is better. Given a limited training time budget, this suggests that a higher α can be preferable. We also evaluate the quality of the Gramian estimates, but due to the large vocabulary size in en, computing the exact Gramians is no longer feasible, so we approximate it using a large sample of 1M embeddings. The are reported in FIG10, which shows the normalized Frobenius distance between the Gramian estimatesĜ u and (the large sample approximation of) the true Gramian G u. The are similar to the experiment on simple: with a lower α, the estimation error is initially high, but decays to a lower value as training progresses, which can be explained by the bias-variance tradeoff discussed in Proposition 5. The tradeoff is affected by the trajectory of the true Gramians: smaller changes in the Gramians (captured by the parameter δ in Proposition 5) induce a smaller bias. In particular, changing the learning rate η of the main algorithm can affect the performance of the Gramian estimates by affecting the rate of change of the true Gramians. To investiage this effect, we ran the same experiment with two different learning rates, η = 0.01 as in Section 4, and a lower learning rate η = 0.002. The errors converge to similar values in both cases, but the error decay occurs much faster with smaller η, which is consistent with our analysis. In this section, we explore the effect of the batch size |B| and learning rate η on the performance of SOGram compared to the baselines. We ran the Wikipedia en experiment with different values of these hyperparameters, and report the final validation MAP in TAB6, which correspond to batch size 128 and 512 respectively. We can make several observations. First, the best performance is consistently achieved by SOGram with learning rate α = 0.001. Second, the relative improvement compared to the baseline is, in general, larger for smaller batch sizes. This can be explained intuitively by the fact that because of online averaging, the quality of the Gramian estimates with SOGram suffers less than with the sampling baseline. Finally, we can also observe that the final performance also seems more robust to the choice of batch size and learning rate, compared to the baseline. For example, with the larger learning rate η = 0.02, the performance degrades for all methods, but the drop in performance for the baseline is much more significant than for the SOGram methods. In this section, we report experiments on a regression task on MovieLens. Dataset The MovieLens dataset consists of movie ratings given by a set of users. 
In our notation, the left features x represent a user, the right features y represent an item, and the target similarity is the rating of movie y by user x. The data is partitioned into a training and a validation set using a (80%-20%) split. Table 5 gives a basic description of the data size. Note that it is comparable to the simple dataset in the Wikipedia experiments. Dataset # users # movies # ratings MovieLens 72K 10K 10M Table 5: Corpus size of the MovieLens dataset. Model We train a two-tower neural network model, as described in Figure 1, where each tower consists of an input layer, a hidden layer, and output embedding dimension k = 35. The left tower takes as input a one-hot encoding of a unique user id, and the right tower takes as input one-hot encodings of a unique movie id, the release year of the movie, and a bag-of-words representation of the genres of the movie. These input embeddings are concatenated and used as input to the right tower. Methods The model is trained using a squared loss (s, s) = 1 2 (s − s) 2, using SOGram with different values of α, and sampling as a baseline. We use a learning rate η = 0.05, and penalty coefficient λ = 1. We measure mean average precision on the trainig set and validation set, following the same procedure described in Section 4. The are given in FIG12.Results The are similar to those reported on the Wikipedia simple dataset, which is comparable in corpus size and number of observations to MovieLens. The best validation mean average precision is achieved by SOGram with α = 0.1 (for an improvement of 2.9% compared to the sampling baseline), despite its poor performance on the training set, which indicates that better estimation of g(θ) induces better regularization. The impact on training speed is also remarkable in this case, SOGram with α = 0.1 achieves a better validation performance in under 1 hour of training than the sampling baseline in 6 hours. | We develop efficient methods to train neural embedding models with a dot-product structure, by reformulating the objective function in terms of generalized Gram matrices, and maintaining estimates of those matrices. | 1,232 | scitldr |
For natural language understanding (NLU) technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks. By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks. GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models. We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task. However, the low absolute performance of our best model indicates the need for improved general NLU systems. The human ability to understand language is general, flexible, and robust. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a more unified model that can learn to execute a range of different linguistic tasks in different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE) benchmark: a collection of NLU tasks including question answering, sentiment analysis, and textual entailment, and an associated online platform for model evaluation, comparison, and analysis. GLUE does not place any constraints on model architecture beyond the ability to process single-sentence and sentence-pair inputs and to make corresponding predictions. For some GLUE tasks, training data is plentiful, but for others it is limited or fails to match the genre of the test set. GLUE therefore favors models that can learn to represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. None of the datasets in GLUE were created from scratch for the benchmark; we rely on preexisting datasets because they have been implicitly agreed upon by the NLP community as challenging and interesting. Four of the datasets feature privately-held test data, which will be used to ensure that the benchmark is used fairly. Table 1: Task descriptions and statistics. All tasks are single sentence or sentence pair classification, except STS-B, which is a regression task. MNLI has three classes; all other classification tasks have two. Test sets shown in bold use labels that have never been made public in any form. To better understand the challenged posed by GLUE, we conduct experiments with simple baselines and state-of-the-art sentence representation models. We find that unified multi-task trained models slightly outperform comparable models trained on each task separately. Our best multi-task model makes use of ELMo BID2, a recently proposed pre-training technique. However, this model still achieves a fairly low absolute score. Analysis with our diagnostic dataset reveals that our baseline models deal well with strong lexical signals but struggle with deeper logical structure. 
In summary, we offer: (i) A suite of nine sentence or sentence-pair NLU tasks, built on established annotated datasets and selected to cover a diverse range of text genres, dataset sizes, and degrees of difficulty.(ii) An online evaluation platform and leaderboard, based primarily on privately-held test data. The platform is model-agnostic, and can evaluate any method capable of producing on all nine tasks. (iii) An expert-constructed diagnostic evaluation dataset. (iv) Baseline for several major existing approaches to sentence representation learning. used a multi-task model with a shared sentence understanding component to jointly learn POS tagging, chunking, named entity recognition, and semantic role labeling. More recent work has explored using labels from core NLP tasks to supervise training of lower levels of deep neural networks BID10 ) and automatically learning cross-task sharing mechanisms for multi-task learning BID6 ).Beyond multi-task learning, much work in developing general NLU systems has focused on sentence-to-vector encoders (; , i.a.), leveraging unlabeled data (; BID2, labeled data , and combinations of these (; BID11 . In this line of work, a standard evaluation practice has emerged, recently codified as SentEval . Like GLUE, SentEval relies on a set of existing classification tasks involving either one or two sentences as inputs. Unlike GLUE, SentEval only evaluates sentenceto-vector encoders, making it well-suited for evaluating models on tasks involving sentences in isolation. However, cross-sentence contextualization and alignment are instrumental in achieving state-of-the-art performance on tasks such as machine translation BID0 BID13, question answering BID8, and natural language inference BID5. GLUE is designed to facilitate the development of these methods: It is model-agnostic, allowing for any kind of representation or contextualization, including models that use no explicit vector or symbolic representations for sentences whatsoever. GLUE also diverges from SentEval in the selection of evaluation tasks that are included in the suite. Many of the SentEval tasks are closely related to sentiment analysis, such as MR , SST BID9, CR , and SUBJ . Other tasks are so close to being solved that evaluation on them is relatively uninformative, such as MPQA and TREC question classification BID14. In GLUE, we attempt to construct a benchmark that is both diverse and difficult. introduce decaNLP, which also scores NLP systems based on their performance on multiple datasets. Their benchmark recasts the ten evaluation tasks as question answering, converting tasks like summarization and text-to-SQL semantic parsing into question answering using automatic transformations. That benchmark lacks the leaderboard and error analysis toolkit of GLUE, but more importantly, we see it as pursuing a more ambitious but less immediately practical goal: While GLUE rewards methods that yield good performance on a circumscribed set of tasks using methods like those that are currently used for those tasks, their benchmark rewards systems that make progress toward their goal of unifying all of NLU under the rubric of question answering. GLUE is centered on nine English sentence understanding tasks, which cover a broad range of domains, data quantities, and difficulties. 
As the goal of GLUE is to spur development of generalizable NLU systems, we design the benchmark such that good performance should require a model to share substantial knowledge (e.g., trained parameters) across all tasks, while still maintaining some taskspecific components. Though it is possible to train a single model for each task with no pretraining or other outside sources of knowledge and evaluate the ing set of models on this benchmark, we expect that our inclusion of several data-scarce tasks will ultimately render this approach uncompetitive. We describe the tasks below and in Table 1. Appendix A includes additional details. Unless otherwise mentioned, tasks are evaluated on accuracy and are balanced across classes. CoLA The Corpus of Linguistic Acceptability BID15 consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence. Following the authors, we use Matthews correlation coefficient as the evaluation metric, which evaluates performance on unbalanced binary classification and ranges from -1 to 1, with 0 being the performance of uninformed guessing. We use the standard test set, for which we obtained private labels from the authors. We report a single performance number on the combination of the in-and out-of-domain sections of the test set. SST-2 The Stanford Sentiment Treebank BID9 consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. We use the two-way (positive/negative) class split, and use only sentence-level labels. MRPC The Microsoft Research Paraphrase Corpus is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent. Because the classes are imbalanced (68% positive), we follow common practice and report both accuracy and F1 score. QQP The Quora Question Pairs 2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent. As in MRPC, the class distribution in QQP is unbalanced (63% negative), so we report both accuracy and F1 score. We use the standard test set, for which we obtained private labels from the authors. We observe that the test set has a different label distribution than the training set. The Semantic Textual Similarity Benchmark is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5; the task is to predict these scores. Follow common practice, we evaluate using Pearson and Spearman correlation coefficients. MNLI The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. We use the standard test set, for which we obtained private labels from the authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. 
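The per-task metrics named in the descriptions above (Matthews correlation for CoLA, accuracy plus F1 for MRPC and QQP, Pearson and Spearman correlation for STS-B) are all available in standard libraries; a hedged sketch, not the official evaluation code, before the remaining task descriptions below:

```python
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def cola_score(y_true, y_pred):
    return matthews_corrcoef(y_true, y_pred)          # in [-1, 1]; 0 = uninformed guessing

def paraphrase_scores(y_true, y_pred):                # MRPC / QQP report both metrics
    return {"acc": accuracy_score(y_true, y_pred), "f1": f1_score(y_true, y_pred)}

def stsb_scores(y_true, y_pred):                      # STS-B is a regression task
    return {"pearson": pearsonr(y_true, y_pred)[0], "spearman": spearmanr(y_true, y_pred)[0]}
```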
We also use and recommend the SNLI corpus as 550k examples of auxiliary training data. QNLI The Stanford Question Answering Dataset BID4 ) is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). We convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. This process of recasting existing datasets into NLI is similar to methods introduced in BID16 and expanded upon in. We call the converted dataset QNLI (Question-answering NLI). RTE The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. We combine the data from RTE1 , RTE2 , RTE3 , and RTE5 . 4 Examples are constructed based on news and Wikipedia text. We convert all datasets to a two-class split, where for three-class datasets we collapse neutral and contradiction into not entailment, for consistency. WNLI The Winograd Schema Challenge is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, we construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the I have never seen a hummingbird not flying. I have never seen a hummingbird. N E Table 3: Examples from the diagnostic set. Fwd (resp. Bwd) denotes the label when sentence 1 (resp. sentence 2) is the premise. Labels are entailment (E), neutral (N), or contradiction (C). Examples are tagged with the phenomena they demonstrate, and each phenomenon belongs to one of four broad categories (in parentheses).pronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of new examples derived from fiction books 5 that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, they will predict the wrong label on corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. We call converted dataset WNLI (Winograd NLI). The GLUE benchmark follows the same evaluation model as SemEval and Kaggle. To evaluate a system on the benchmark, one must run the system on the provided test data for the tasks, then upload the to the website gluebenchmark.com for scoring. 
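The QNLI recasting described above (pair a question with each sentence of its paragraph, label the pair by whether the sentence contains the answer, drop low-overlap pairs) can be sketched as follows. The naive sentence splitter, token-overlap measure, and threshold are illustrative assumptions, not the procedure actually used to build the dataset; Appendix A notes the released version uses a CBoW similarity to balance the classes.

```python
# Hypothetical sketch of the QNLI-style recasting described above. The sentence
# splitter, overlap measure, and 0.1 threshold are assumptions for illustration.
def split_sentences(paragraph):
    return [s.strip() + "." for s in paragraph.split(".") if s.strip()]


def lexical_overlap(question, sentence):
    q, s = set(question.lower().split()), set(sentence.lower().split())
    return len(q & s) / max(len(q), 1)


def recast_to_qnli(question, paragraph, answer_text, min_overlap=0.1):
    pairs = []
    for sentence in split_sentences(paragraph):
        label = "entailment" if answer_text in sentence else "not_entailment"
        # Keep answer-bearing pairs; filter distractors with low lexical overlap.
        if label == "entailment" or lexical_overlap(question, sentence) >= min_overlap:
            pairs.append({"question": question, "sentence": sentence, "label": label})
    return pairs
```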
The benchmark site shows per-task scores and a macro-average of those scores to determine a system's position on the leaderboard. For tasks with multiple metrics (e.g., accuracy and F1), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The website also provides fine-and coarse-grained on the diagnostic dataset. See Appendix D for details. Drawing inspiration from the FraCaS suite and the recent Build-It-Break-It competition , we include a small, manually-curated test set for the analysis of system performance. While the main benchmark mostly reflects an application-driven distribution of examples, our diagnostic dataset highlights a pre-defined set of phenomena that we believe are interesting and important for models to capture. We show the full set of phenomena in TAB1.Each diagnostic example is an NLI sentence pair with tags for the phenomena demonstrated. The NLI task is well-suited to this kind of analysis, as it can easily evaluate the full set of skills involved in (ungrounded) sentence understanding, from resolution of syntactic ambiguity to pragmatic reasoning with world knowledge. We ensure the data is reasonably diverse by producing examples for a variety of linguistic phenomena and basing our examples on naturally-occurring sentences from several domains (news, Reddit, Wikipedia, academic papers). This approaches differs from that of FraCaS, which was designed to test linguistic theories with a minimal and uniform set of examples. A sample from our dataset is shown in Table 3.Annotation Process We begin with a target set of phenomena, based roughly on those used in the FraCaS suite . We construct each example by locating a sentence that can be easily made to demonstrate a target phenomenon, and editing it in two ways to produce an appropriate sentence pair. We make minimal modifications so as to maintain high lexical and structural overlap within each sentence pair and limit superficial cues. We then label the inference relationships between the sentences, considering each sentence alternatively as the premise, producing two labeled examples for each pair (1100 total). Where possible, we produce several pairs with different labels for a single source sentence, to have minimal sets of sentence pairs that are lexically and structurally very similar but correspond to different entailment relationships. The ing labels are 42% entailment, 35% neutral, and 23% contradiction. Evaluation Since the class distribution in the diagnostic set is not balanced, we use R 3 , a three-class generalization of the Matthews correlation coefficient, for evaluation. In light of recent work showing that crowdsourced data often contains artifacts which can be exploited to perform well without solving the intended task BID7 BID3 , i.a.), we audit the data for such artifacts. We reproduce the methodology of , training two fastText classifiers to predict entailment labels on SNLI and MNLI using only the hypothesis as input. The models respectively get near-chance accuracies of 32.7% and 36.4% on our diagnostic data, showing that the data does not suffer from such artifacts. To establish human baseline performance on the diagnostic set, we have six NLP researchers annotate 50 sentence pairs (100 entailment examples) randomly sampled from the diagnostic set. Interannotator agreement is high, with a Fleiss's κ of 0.73. 
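A minimal sketch of the leaderboard aggregation described above: a task with several metrics contributes the unweighted mean of those metrics, and the overall leaderboard score is the macro-average over tasks. The task names and numbers in the example are placeholders, not reported results.

```python
# Sketch of the scoring scheme described above. Example numbers are made up.
def task_score(metrics):
    # Unweighted average of a task's metrics (e.g., accuracy and F1).
    return sum(metrics.values()) / len(metrics)


def macro_average(per_task_metrics):
    scores = {task: task_score(m) for task, m in per_task_metrics.items()}
    overall = sum(scores.values()) / len(scores)   # macro-average over tasks
    return overall, scores


example = {
    "CoLA": {"mcc": 0.30},
    "MRPC": {"acc": 0.76, "f1": 0.83},                      # multi-metric task
    "STS-B": {"pearson": 0.72, "spearman": 0.71},
    "MNLI": {"matched_acc": 0.72, "mismatched_acc": 0.72},
}
overall, per_task = macro_average(example)
```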
The average R 3 score among the annotators is 0.80, much higher than any of the baseline systems described in Section 5.Intended Use The diagnostic examples are hand-picked to address certain phenomena, and NLI is a task with no natural input distribution, so we do not expect performance on the diagnostic set to reflect overall performance or generalization in downstream applications. Performance on the analysis set should be compared between models but not between categories. The set is provided not as a benchmark, but as an analysis tool for error analysis, qualitative model comparison, and development of adversarial examples. For baselines, we evaluate a multi-task learning model trained on the GLUE tasks, as well as several variants based on recent pre-training methods. We briefly describe them here. See Appendix B for details. We implement our models in the AllenNLP library . Original code for the baselines is available at https://github.com/nyu-mll/GLUE-baselines and a newer version is available at https://github.com/jsalt18-sentence-repl/jiant.Architecture Our simplest baseline architecture is based on sentence-to-vector encoders, and sets aside GLUE's ability to evaluate models with more complex structures. Taking inspiration from , the model uses a two-layer, 1500D (per direction) BiLSTM with max pooling and 300D GloVe word embeddings (840B Common Crawl version; BID1 . For single-sentence tasks, we encode the sentence and pass the ing vector to a classifier. For sentence-pair tasks, we encode sentences independently to produce vectors u, v, and pass [u; v; |u − v|; u * v] to a classifier. The classifier is an MLP with a 512D hidden layer. We also consider a variant of our model which for sentence pair tasks uses an attention mechanism inspired by BID8 between all pairs of words, followed by a second BiLSTM with max pooling. By explicitly modeling the interaction between sentences, these models fall outside the sentence-to-vector paradigm. Pre-Training We augment our base model with two recent methods for pre-training: ELMo and CoVe. We use existing trained models for both. ELMo uses a pair of two-layer neural language models trained on the Billion Word Benchmark (Training We train our models with the BiLSTM sentence encoder and post-attention BiLSTMs shared across tasks, and classifiers trained separately for each task. For each training update, we sample a task to train with a probability proportional to the number of training examples for each task. We train our models with Adam with initial learning rate 10 −4 and batch size 128. We use the macro-average score as the validation metric and stop training when the learning rate drops below 10 −5 or performance does not improve after 5 validation checks. We also train a set of single-task models, which are configured and trained identically, but share no parameters. To allow for fair comparisons with the multi-task analogs, we do not tune parameter or training settings for each task, so these single-task models do not generally represent the state of the art for each task. Sentence Representation Models Finally, we evaluate the following trained sentence-to-vector encoder models using our benchmark: average bag-of-words using GloVe embeddings (CBoW), Skip-Thought , InferSent , DisSent , and GenSen BID11. For these models, we only train task-specific classifiers on the representations they produce. We train three runs of each model and evaluate the run with the best macro-average development set performance (see TAB8 in Appendix C). 
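A minimal PyTorch sketch of the baseline architecture described above: a two-layer, 1500D-per-direction BiLSTM with max pooling over pre-embedded GloVe inputs, and the [u; v; |u − v|; u * v] pair features passed to an MLP with a 512D hidden layer. Padding, masking, and the embedding lookup are omitted; this is a simplified reconstruction, not the authors' released code.

```python
# Simplified sketch of the BiLSTM sentence encoder and pair classifier above.
import torch
import torch.nn as nn


class BiLSTMEncoder(nn.Module):
    def __init__(self, emb_dim=300, hidden=1500):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, embedded_tokens):          # (batch, seq, emb_dim)
        states, _ = self.lstm(embedded_tokens)   # (batch, seq, 2 * hidden)
        return states.max(dim=1).values          # max pooling over time


class PairClassifier(nn.Module):
    def __init__(self, enc_dim=3000, n_classes=3):   # n_classes varies by task
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4 * enc_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_classes))

    def forward(self, u, v):
        feats = torch.cat([u, v, (u - v).abs(), u * v], dim=-1)
        return self.mlp(feats)
```

Single-sentence tasks use the encoder output directly; only the classifier head changes across tasks.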
For single-task and sentence representation models, we evaluate the best run for each individual task. We present performance on the main benchmark tasks in TAB4.We find that multi-task training yields better overall scores over single-task training amongst models using attention or ELMo. Attention generally has negligible or negative aggregate effect in single task training, but helps in multi-task training. We see a consistent improvement in using ELMo embeddings in place of GloVe or CoVe embeddings, particularly for single-sentence tasks. Using CoVe has mixed effects over using only GloVe. Among the pre-trained sentence representation models, we observe fairly consistent gains moving from CBoW to Skip-Thought to Infersent and GenSen. Relative to the models trained directly on the GLUE tasks, InferSent is competitive and GenSen outperforms all but the two best. Looking at per task, we find that the sentence representation models substantially underperform on CoLA compared to the models directly trained on the task. On the other hand, for STS-B, models trained directly on the task lag significantly behind the performance of the best sentence representation model. Finally, there are tasks for which no model does particularly well. On WNLI, no model exceeds most-frequent-class guessing (65.1%) and we substitute the model predictions for the most-frequent baseline. On RTE and in aggregate, even our best baselines leave room for improvement. These early indicate that solving GLUE is beyond the capabilities of current models and methods. We analyze the baselines by evaluating each model's MNLI classifier on the diagnostic set to get a better sense of their linguistic capabilities. Results are presented in TAB6.Coarse Categories Overall performance is low for all models: The highest total score of 28 still denotes poor absolute performance. Performance tends to be higher on Predicate-Argument Structure and lower on Logic, though numbers are not closely comparable across categories. Unlike on the main benchmark, the multi-task models are almost always outperformed by their single-task counterparts. This is perhaps unsurprising, since with our simple multi-task training regime, there is likely some destructive interference between MNLI and the other tasks. The models trained on the GLUE tasks largely outperform the pretrained sentence representation models, with the exception of GenSen. Using attention has a greater influence on diagnostic scores than using ELMo or CoVe, which we take to indicate that attention is especially important for generalization in NLI.Fine-Grained Subcategories Most models handle universal quantification relatively well. Looking at relevant examples, it seems that relying on lexical cues such as "all" often suffices for good performance. Similarly, lexical cues often provide good signal in morphological negation examples. We observe varying weaknesses between models. Double negation is especially difficult for the GLUE-trained models that only use GloVe embeddings. This is ameliorated by ELMo, and to some degree CoVe. Also, attention has mixed effects on overall , and models with attention tend to struggle with downward monotonicity. Examining their predictions, we found that the models are sensitive to hypernym/hyponym substitution and word deletion as a signal of entailment, but predict it in the wrong direction (as if the substituted/deleted word were in an upward monotone context). 
This is consistent with recent findings by that these systems use the subsequence relation between premise and hypothesis as a heuristic shortcut. Restrictivity examples, which often depend on nuances of quantifier scope, are especially difficult for almost all models. Overall, there is evidence that going beyond sentence-to-vector representations, e.g. with an attention mechanism, might aid performance on out-of-domain data, and that transfer methods like ELMo and CoVe encode linguistic information specific to their supervision signal. However, increased representational capacity may lead to overfitting, such as the failure of attention models in downward monotone contexts. We expect that our platform and diagnostic dataset will be useful for similar analyses in the future, so that model designers can better understand their models' generalization behavior and implicit knowledge. We introduce GLUE, a platform and collection of resources for evaluating and analyzing natural language understanding systems. We find that, in aggregate, models trained jointly on our tasks see better performance than the combined performance of models trained for each task separately. We confirm the utility of attention mechanisms and transfer learning methods such as ELMo in NLU systems, which combine to outperform the best sentence representation models on the GLUE benchmark, but still leave room for improvement. When evaluating these models on our diagnostic dataset, we find that they fail (often spectacularly) on many linguistic phenomena, suggesting possible avenues for future work. In sum, the question of how to design general-purpose NLU models remains unanswered, and we believe that GLUE can provide fertile soil for addressing this challenge. A ADDITIONAL BENCHMARK DETAILS QNLI To construct a balanced dataset, we select all pairs in which the most similar sentence to the question was not the answer sentence, as well as an equal amount of cases in which the correct sentence was the most similar to the question, but another distracting sentence was a close second. Our similarity metric is based on CBoW representations with pre-trained GloVe embeddings. This approach to converting pre-existing datasets into NLI format is closely related to recent work by BID16, as well as to the original motivation for textual entailment presented by. Both argue that many NLP tasks can be productively reduced to textual entailment. We implement our attention mechanism as follows: given two sequences of hidden states u 1, u 2,..., u M and v 1, v 2,..., v N, we first compute matrix H where H ij = u i · v j. For each u i, we get attention weights α i by taking a softmax over the i th row of H, and get the corresponding context vectorṽ i = j α ij v j by taking the attention-weighted sum of the v j. We pass a second BiLSTM with max pooling over the sequence [u 1 ;ṽ 1],... [u M ;ṽ M] to produce u. We process the v j vectors analogously to obtain v. Finally, we feed [u ; v ; |u − v |; u * v] into a classifier. We train our models with the BiLSTM sentence encoder and post-attention BiLSTMs shared across tasks, and classifiers trained separately for each task. For each training update, we sample a task to train with a probability proportional to the number of training examples for each task. We scale each task's loss inversely proportional to the number of examples for that task, which we found to improve overall performance. We train our models with Adam with initial learning rate 10 −3, batch size 128, and gradient clipping. 
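The attention mechanism of Appendix B can be sketched directly from its description: H_ij = u_i · v_j, row-wise softmax attention weights, attention-weighted context vectors, and a second BiLSTM with max pooling over the concatenated sequence. The sketch below handles one direction (producing u'); v' is obtained symmetrically. The single-example (unbatched) treatment and layer sizes are simplifications.

```python
# Sketch of the cross-sentence attention described in Appendix B.
import torch
import torch.nn as nn
import torch.nn.functional as F


def attend(U, V):
    """U: (M, d), V: (N, d) hidden states of the two sentences."""
    H = U @ V.t()                        # H[i, j] = u_i . v_j
    alpha = F.softmax(H, dim=1)          # attention weights over j for each u_i
    V_tilde = alpha @ V                  # context vectors v_tilde_i
    return torch.cat([U, V_tilde], dim=-1)   # sequence [u_i; v_tilde_i]


class PostAttentionEncoder(nn.Module):
    def __init__(self, d=3000, hidden=1500):
        super().__init__()
        self.lstm = nn.LSTM(2 * d, hidden, bidirectional=True, batch_first=True)

    def forward(self, U, V):
        seq = attend(U, V).unsqueeze(0)          # add a batch dimension
        states, _ = self.lstm(seq)
        return states.max(dim=1).values          # max pooling -> u'
```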
We use macro-average score over all tasks as our validation metric, and perform a validation check every 10k updates. We divide the learning rate by 5 whenever validation performance does not improve. We stop training when the learning rate drops below 10 −5 or performance does not improve after 5 validation checks. We evaluate the following sentence representation models:1. CBoW, the average of the GloVe embeddings of the tokens in the sentence., a BiLSTM with max-pooling trained to predict the discourse marker (because, so, etc.) relating two sentences on data derived from TBC. We use the variant trained for eight-way classification. 5. GenSen BID11, a sequence-to-sequence model trained on a variety of supervised and unsupervised objectives. We use the variant of the model trained on both MNLI and SNLI, the Skip-Thought objective on TBC, and a constituency parsing objective on the Billion Word Benchmark. We train task-specific classifiers on top of frozen sentence encoders, using the default parameters from SentEval. See https://github.com/nyu-mll/SentEval for details and code. The GLUE website limits users to two submissions per day in order to avoid overfitting to the private test data. To provide a reference for future work on GLUE, we present the best development set achieved by our baselines in TAB8. GLUE's online platform is built using React, Redux and TypeScript. We use Google Firebase for data storage and Google Cloud Functions to host and run our grading script when a submission is made. FIG1 shows the visual presentation of our baselines on the leaderboard. Table 7: Diagnostic dataset statistics by coarse-grained category. Note that some examples may be tagged with phenomena belonging to multiple categories. The dataset is designed to allow for analyzing many levels of natural language understanding, from word meaning and sentence structure to high-level reasoning and application of world knowledge. To make this kind of analysis feasible, we first identify four broad categories of phenomena: Lexical Semantics, Predicate-Argument Structure, Logic, and Knowledge. However, since these categories are vague, we divide each into a larger set of fine-grained subcategories. Descriptions of all of the fine-grained categories are given in the remainder of this section. These categories are just one lens that can be used to understand linguistic phenomena and entailment, and there is certainly room to argue about how examples should be categorized, what the categories should be, etc. These categories are not based on any particular linguistic theory, but broadly based on issues that linguists have often identified and modeled in the study of syntax and semantics. The dataset is provided not as a benchmark, but as an analysis tool to paint in broad strokes the kinds of phenomena a model may or may not capture, and to provide a set of examples that can serve for error analysis, qualitative model comparison, and development of adversarial examples that expose a model's weaknesses. Because the distribution of language is somewhat arbitrary, it will not be helpful to compare performance of the same model on different categories. Rather, we recommend comparing performance that different models score on the same category, or using the reported scores as a guide for error analysis. We show coarse-grain category counts and label distributions of the diagnostic set in Table 7. These phenomena center on aspects of word meaning. 
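The CBoW baseline above is simply the average of the GloVe embeddings of a sentence's tokens, with only a task-specific classifier trained on top of the frozen representation. A minimal sketch follows, assuming the standard whitespace-separated GloVe text format; this is not the SentEval implementation referenced in the text.

```python
# Minimal CBoW sentence encoder: average of (frozen) GloVe token embeddings.
import numpy as np


def load_glove(path):
    # Simplified parser for the "token v1 ... v300" text format.
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors


def cbow_encode(sentence, glove, dim=300):
    vecs = [glove[t] for t in sentence.lower().split() if t in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim, dtype=np.float32)
```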
Lexical Entailment Entailment can be applied not only on the sentence level, but the word level. For example, we say "dog" lexically entails "animal" because anything that is a dog is also an animal, and "dog" lexically contradicts "cat" because it is impossible to be both at once. This relationship applies to many types of words (nouns, adjectives, verbs, many prepositions, etc.) and the relationship between lexical and sentential entailment has been deeply explored, e.g., in systems of natural logic. This connection often hinges on monotonicity in language, so many Lexical Entailment examples will also be tagged with one of the Monotone categories, though we do not do this in every case (see Monotonicity, under Logic).Morphological Negation This is a special case of lexical contradiction where one word is derived from the other: from "affordable" to "unaffordable", "agree" to "disagree", etc. We also include examples like "ever" and "never". We also label these examples with Negation or Double Negation, since they can be viewed as involving a word-level logical negation. Factivity Propositions appearing in a sentence may be in any entailment relation with the sentence as a whole, depending on the context in which they appear. In many cases, this is determined by lexical triggers (usually verbs or adverbs) in the sentence. For example,• "I recognize that X" entails "X".• "I did not recognize that X" entails "X".• "I believe that X" does not entail "X".• "I am refusing to do X" contradicts "I am doing X".• "I am not refusing to do X" does not contradict "I am doing X".• "I almost finished X" contradicts "I finished X".• "I barely finished X" entails "I finished X".Constructions like "I recognize that X" are often called factive, since the entailment (of X above, regarded as a presupposition) persists even under negation. Constructions like "I am refusing to do X" above are often called implicative, and are sensitive to negation. There are also cases where a sentence (non-)entails the existence of an entity mentioned in it, for example "I have found a unicorn" entails "A unicorn exists" while "I am looking for a unicorn" doesn't necessarily entail "A unicorn exists". Readings where the entity does not necessarily exist are often called intensional readings, since they seem to deal with the properties denoted by a description (its intension) rather than being reducible to the set of entities that match the description (its extension, which in cases of non-existence will be empty).We place all examples involving these phenomena under the label of Factivity. While it often depends on context to determine whether a nested proposition or existence of an entity is entailed by the overall statement, very often it relies heavily on lexical triggers, so we place the category under Lexical Semantics. Symmetry/Collectivity Some propositions denote symmetric relations, while others do not. For example, "John married Gary" entails "Gary married John" but "John likes Gary" does not entail "Gary likes John". Symmetric relations can often be rephrased by collecting both arguments into the subject: "John met Gary" entails "John and Gary met". Whether a relation is symmetric, or admits collecting its arguments into the subject, is often determined by its head word (e.g., "like", "marry" or "meet"), so we classify it under Lexical Semantics. 
Redundancy If a word can be removed from a sentence without changing its meaning, that means the word's meaning was more-or-less adequately expressed by the sentence; so, identifying these cases reflects an understanding of both lexical and sentential semantics. Named Entities Words often name entities that exist in the world. There are many different kinds of understanding we might wish to understand about these names, including their compositional structure (for example, the "Baltimore Police" is the same as the "Police of the City of Baltimore") or their real-world referents and acronym expansions (for example, "SNL" is "Saturday Night Live"). This category is closely related to World Knowledge, but focuses on the semantics of names as lexical items rather than knowledge about their denoted entities. Quantifiers Logical quantification in natural language is often expressed through lexical triggers such as "every", "most", "some", and "no". While we reserve the categories in Quantification and Monotonicity for entailments involving operations on these quantifiers and their arguments, we choose to regard the interchangeability of quantifiers (e.g., in many cases "most" entails "many") as a question of lexical semantics. An important component of understanding the meaning of a sentence is understanding how its parts are composed together into a whole. In this category, we address issues across that spectrum, from syntactic ambiguity to semantic roles and coreference. Syntactic Ambiguity: Relative Clauses, Coordination Scope These two categories deal purely with resolving syntactic ambiguity. Relative clauses and coordination scope are both sources of a great amount of ambiguity in English. Prepositional phrases Prepositional phrase attachment is a particularly difficult problem that syntactic parsers in NLP systems continue to struggle with. We view it as a problem both of syntax and semantics, since prepositional phrases can express a wide variety of semantic roles and often semantically apply beyond their direct syntactic attachment. Core Arguments Verbs select for particular arguments, particularly subjects and objects, which might be interchangeable depending on the context or the surface form. One example is the ergative alternation: "Jake broke the vase" entails "the vase broke" but "Jake broke the vase" does not entail "Jake broke". Other rearrangements of core arguments, such as those seen in Symmetry/Collectivity, also fall under the Core Arguments label. Alternations: Active/Passive, Genitives/Partitives, Nominalization, Datives All four of these categories correspond to syntactic alternations that are known to follow specific patterns in English:• Active/Passive: "I saw him" is equivalent to "He was seen by me" and entails " He was seen".• Genitives/Partitives: "the elephant's foot" is the same thing as "the foot of the elephant".• Nominalization: "I caused him to submit his resignation" entails "I caused the submission of his resignation".• Datives: "I baked him a cake" entails "I baked a cake for him" and "I baked a cake" but not "I baked him".Ellipsis/Implicits Often, the argument of a verb or other predicate is omitted (elided) in the text, with the reader filling in the gap. We can construct entailment examples by explicitly filling in the gap with the correct or incorrect referents. 
For example, the premise "Putin is so entrenched within Russias ruling system that many of its members can imagine no other leader" entails "Putin is so entrenched within Russias ruling system that many of its members can imagine no other leader than Putin" and contradicts "Putin is so entrenched within Russias ruling system that many of its members can imagine no other leader than themselves." This is often regarded as a special case of anaphora, but we decided to split out these cases from explicit anaphora, which is often also regarded as a case of coreference (and attempted to some degree in modern coreference resolution systems).Anaphora/Coreference Coreference refers to when multiple expressions refer to the same entity or event. It is closely related to Anaphora, where the meaning of an expression depends on another (antecedent) expression in context. These two phenomena have significant overlap; for example, pronouns ("she", "we", "it") are anaphors that are co-referent with their antecedents. However, they also may occur independently, such as coreference between two definite noun phrases (e.g., "Theresa May "and the "British Prime Minister") that refer to the same entity, or anaphora from a word like "other" which requires an antecedent to distinguish something from. In this category we only include cases where there is an explicit phrase (anaphoric or not) that is co-referent with an antecedent or other phrase. We construct examples for these in much the same way as for Ellipsis/Implicits. Intersectivity Many modifiers, especially adjectives, allow non-intersective uses, which affect their entailment behavior. For example:• Intersective: "He is a violinist and an old surgeon" entails "He is an old violinist" and "He is a surgeon".• Non-intersective: "He is a violinist and a skilled surgeon" does not entail "He is a skilled violinist".• Non-intersective: "He is a fake surgeon" does not entail "He is a surgeon".Generally, an intersective use of a modifier, like "old" in "old men", is one which may be interpreted as referring to the set of entities with both properties (they are old and they are men). Linguists often formalize this using set intersection, hence the name. Intersectivity is related to Factivity. For example, "fake" may be regarded as a counter-implicative modifier, and these examples will be labeled as such. However, we choose to categorize intersectivity under predicate-argument structure rather than lexical semantics, because generally the same word will admit both intersective and non-intersective uses, so it may be regarded as an ambiguity of argument structure. Restrictivity Restrictivity is most often used to refer to a property of uses of noun modifiers. In particular, a restrictive use of a modifier is one that serves to identify the entity or entities being described, whereas a non-restrictive use adds extra details to the identified entity. The distinction can often be highlighted by entailments:• Restrictive: "I finished all of my homework due today" does not entail "I finished all of my homework".• Non-restrictive: "I got rid of all those pesky bedbugs" entails "I got rid of all those bedbugs".Modifiers that are commonly used non-restrictively are appositives, relative clauses starting with "which" or "who", and expletives (e.g. "pesky"). Non-restrictive uses can appear in many forms. 
With an understanding of the structure of a sentence, there is often a baseline set of shallow conclusions that can be drawn using logical operators and often modeled using the mathematical tools of logic. Indeed, the development of mathematical logic was initially guided by questions about natural language meaning, from Aristotelian syllogisms to Fregean symbols. The notion of entailment is also borrowed from mathematical logic. Propositional Structure: Negation, Double Negation, Conjunction, Disjunction, Conditionals All of the basic operations of propositional logic appear in natural language, and we tag them where they are relevant to our examples: • Negation: "The cat sat on the mat" contradicts "The cat did not sit on the mat". • Double negation: "The market is not impossible to navigate" entails "The market is possible to navigate". | We present a multi-task benchmark and analysis platform for evaluating generalization in natural language understanding systems. | 1,233 | scitldr |
A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success. This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others' success, and credit-assignment for interactions between actions and goals of different agents. To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment. We use a function augmentation scheme to bridge value and policy functions across the curriculum. The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment. Many real-world scenarios that require cooperation among multiple autonomous agents are multi-goal multi-agent control problems: each agent needs to achieve its own individual goal, but the global optimum where all agents succeed is only attained when agents cooperate to allow the success of other agents. In autonomous driving, multiple vehicles must execute cooperative maneuvers when their individual goal locations and nominal trajectories are in conflict (e.g., double lane merges) . In social dilemmas, mutual cooperation has higher global payoff but agents' individual goals may lead to defection out of fear or greed . Even settings with a global objective that seem unfactorizable can be formulated as multi-goal problems: in Starcraft II micromanagement, a unit that gathers resources must not accidentally jeopardize a teammate's attempt to scout the opponent base ; in traffic flow optimization, different intersection controllers may have local throughput goals but must cooperate for high global performance . While the framework of multi-agent reinforcement learning (MARL) (; ;) has been equipped with methods in deep reinforcement learning (RL) and shown promise on high-dimensional problems with complex agent interactions (; ; ; ;), learning multi-agent cooperation in the multi-goal scenario involves significant open challenges. First, given that exploration is crucial for RL and even more so in MARL with larger state and joint action spaces, how should agents explore to learn both individual goal attainment and cooperation for others' success? Uniform random exploration is common in deep MARL but can be highly inefficient as the value of cooperative actions may be discoverable only in small regions of state space where cooperation is needed. Furthermore, the conceptual difference between attaining one's own goal and cooperating for others' success calls for more modularized and targeted approaches. Second, while there are methods for multi-agent credit assignment when all agents share a single goal (i.e., a global reward) (; ;), and while one could treat the cooperative multi-goal scenario as a problem with a single joint goal, this coarse approach makes it extremely difficult to evaluate the impact of an agent's action on another agent's success. 
Instead, the multi-goal scenario can benefit from fine-grained credit assignment that leverages available structure in action-goal interactions, such as local interactions where only few agents affect another agent's goal attainment at any time. Given these open challenges, our paper focuses on the cooperative multi-goal multi-agent setting where each agent is assigned a goal 1 and must learn to cooperate with other agents with possibly different goals. To tackle the problems of efficient exploration and credit assignment in this complex problem setting, we develop CM3, a novel general framework involving three synergistic components: 1. We approach the difficulty of multi-agent exploration from a novel curriculum learning perspective, by first training an actor-critic pair to achieve different goals in an induced single-agent setting (Stage 1), then using them to initialize all agents in the multi-agent environment (Stage 2). The key insight is that agents who can already act toward individual objectives are better prepared for discovery of cooperative solutions with additional exploration once other agents are introduced. In contrast to hierarchical learning where sub-goals are selected sequentially in time , all agents act toward their goals simultaneously in Stage 2 of our curriculum. 2. Observing that a wide array of complex MARL problems permit a decomposition of agents' observations and state vectors into components of self, others, and non-agent specific environment information , we employ function augmentation to bridge Stages 1-2: we reduce the number of trainable parameters of the actor-critic in Stage 1 by limiting their input space to the part that is sufficient for single-agent training, then augment the architecture in Stage 2 with additional inputs and trainable parameters for learning in the multi-agent environment. 3. We propose a credit function, which is an action-value function that specifically evaluates actiongoal pairs, for localized credit assignment in multi-goal MARL. We use it to derive a multi-goal multi-agent policy gradient for Stage 2. In synergy with the curriculum, the credit function is constructed via function augmentation from the critic in Stage 1. We evaluate our method on challenging multi-goal multi-agent environments with high-dimensional state spaces: cooperative navigation with difficult formations, double lane merges in the SUMO simulator , and strategic teamwork in a Checkers game. CM3 solved all domains significantly faster than IAC and COMA , and solved four out of five environments significantly faster than QMIX . Exhaustive ablation experiments show that the combination of all three components is crucial for CM3's overall high performance. While early theoretical work analyzed Markov games in discrete state and action spaces (; ;), recent literature have leveraged techniques from deep RL to develop general algorithms for high dimensional environments with complex agent interactions (; ;), which pose difficulty for traditional methods that do not generalize by learning interactions . Cooperative multi-agent learning is important since many real-world problems can be formulated as distributed systems in which decentralized agents must coordinate to achieve shared objectives . The multi-agent credit assignment problem arises when agents share a global reward . 
While credit assignment be resolved when independent individual rewards are available , this may not be suitable for the fully cooperative setting: showed that agents whose rewards depend on the success of other agents can cooperate better than agents who optimize for their own success. In the special case when all agents have a single goal and share a global reward, COMA et al., 2019) apply to agents with different rewards, they do not address multi-goal cooperation as they do not distinguish between cooperation and competition, despite the fundamental difference. Multi-goal MARL was considered in , who analyzed convergence in a special networked setting restricted to fully-decentralized training, while we conduct centralized training with decentralized execution . In contrast to multi-task MARL, which aims for generalization among non-simultaneous tasks , and in contrast to hierarchical methods that sequentially select subtasks , our decentralized agents must cooperate concurrently to attain all goals. Methods for optimizing high-level agent-task assignment policies in a hierarchical framework are complementary to our work, as we focus on learning low-level cooperation after goals are assigned. Prior application of curriculum learning to MARL include a single cooperative task defined by the number of agents and the probability of agent appearance , without explicit individual goals. instantiate new neural network columns for task transfer in single-agent RL. Techniques in transfer learning are complementary to our novel curriculum approach to MARL. In multi-goal MARL, each agent should achieve a goal drawn from a finite set, cooperate with other agents for collective success, and act independently with limited local observations. We formalize the problem as an episodic multi-goal Markov game, review an actor-critic approach to centralized training of decentralized policies, and summarize counterfactual-based multi-agent credit assignment. Multi-goal Markov games. A multi-goal Markov game is a tuple S, {O n}, {A n}, P, R, G, N, γ with N agents labeled by n ∈ [N]. In each episode, each agent n has one fixed goal g n ∈ G that is known only to itself. At time t and global state s t ∈ S, each agent n receives an observation o n t:= o n (s t) ∈ O n and chooses an action a n t ∈ A n. The environment moves to s t+1 due to joint action a t:= {a 1 t, . . ., a N t}, according to transition probability P (s t+1 |s t, a t). Each agent receives a reward R n t:= R(s t, a t, g n), and the learning task is to find stochastic decentralized policies π n: O n × G × A n →, conditioned only on local observations and goals, to maximize where γ ∈ and joint policy π factorizes as π(a|s, g):= N n=1 π n (a n |o n, g n) due to decentralization. Let a −n and g −n denote all agents' actions and goals, respectively, except that of agent n. Let boldface a and g denote the joint action and joint goals, respectively. For brevity, let π(a n):= π n (a n |o n, g n). This model covers a diverse set of cooperation problems in the literature , without constraining how the attainability of a goal depends on other agents: at a traffic intersection, each vehicle can easily reach its target location if not for the presence of other vehicles; in contrast, agents in a strategic game may not be able to maximize their rewards in the absence of cooperators . Centralized learning of decentralized policies. 
A centralized critic that receives full state-action information can speed up training of decentralized actors that receive only local information . Directly extending the single-goal case, for each n ∈ [1..N] in a multigoal Markov game, critics are represented by the value function V π n (s):= E π ∞ t=0 γ t R n t s 0 = s and the action-value function Q π n (s, a):= E π ∞ t=0 γ t R n t s 0 = s, a 0 = a, which evaluate the joint policy π against the reward R n for each goal g n. Multi-agent credit assignment. In MARL with a single team objective, COMA addresses credit assignment by using a counterfactual baseline in an advantage function (, Lemma 1), which evaluates the contribution of a chosen action a n versus the average of all possible counterfactualsâ n, keeping a −n fixed. The analysis in for a formally equivalent action-dependent baseline in RL suggests that COMA is a low-variance estimator for single-goal MARL. We derive its variance in Appendix C.1. However, COMA is unsuitable for credit assignment in multi-goal MARL, as it would treat the collection of goals g as a global goal and only learn from total reward, making it extremely difficult to disentangle each agent's impact on other agents' goal attainment. Furthermore, a global Q-function does not explicitly capture structure in agents' interactions, such as local interactions involving a limited number of agents. We substantiate these arguments by experimental in Section 6. We describe the complete CM3 learning framework as follows. First we define a credit function as a mechanism for credit assignment in multi-goal MARL, then derive a new cooperative multi-goal policy gradient with localized credit assignment. Next we motivate the possibility of significant training speedup via a curriculum for multi-goal MARL. We describe function augmentation as a mechanism for efficiently bridging policy and value functions across the curriculum stages, and finally synthesize all three components into a synergistic learning framework. If all agents take greedy goal-directed actions that are individually optimal in the absence of other agents, the joint action can be sub-optimal (e.g. straight-line trajectory towards target in traffic). Instead rewarding agents for both individual and collective success can avoid such bad local optima. A naïve approach based on previous works would evaluate the joint action a via a global Q-function Q π n (s, a) for each agent's goal g n, but this does not precisely capture each agent's contribution to another agent's attainment of its goal. Instead, we propose an explicit mechanism for credit assignment by learning an additional function Q π n (s, a m) that evaluates pairs of action a m and goal g n, for use in a multi-goal actor-critic algorithm. We define this function and show that it satisfies the classical relation needed for sample-based model-free learning. Definition 1. For n, m ∈ [N], s ∈ S, the credit function for goal g n and a m ∈ A m by agent m is:, the credit function satisfies the following relations: Derivations are given in Appendix B.1, including the relation between Q π n (s, a m) and Q π n (s, a). 
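Definition 1 introduces a credit function Q_n^π(s, a^m) that scores a single agent's action against a goal, and the following section notes that it is trained with a standard deep-RL loss (the displayed equation itself is not preserved in this text). The PyTorch sketch below shows one plausible parameterization and TD-style loss; the network shapes, the target-network handling, and the batch layout are assumptions for illustration.

```python
# Hypothetical sketch of a credit-function critic Q_n(s, a^m, g_n) and a
# TD-style loss. Shapes and target construction are assumptions.
import torch
import torch.nn as nn


class CreditFunction(nn.Module):
    def __init__(self, state_dim, action_dim, goal_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, s, a_m, g_n):               # a_m: one agent's one-hot action
        return self.net(torch.cat([s, a_m, g_n], dim=-1)).squeeze(-1)


def credit_td_loss(qc, qc_target, batch, gamma=0.99):
    # batch holds s, a_m, g_n, the goal-specific reward R(s, a, g_n), s', and
    # the next action a_m' sampled from the current policy; a separate target
    # network provides the bootstrap value.
    with torch.no_grad():
        target = batch["r_n"] + gamma * qc_target(
            batch["s_next"], batch["a_m_next"], batch["g_n"])
    return ((qc(batch["s"], batch["a_m"], batch["g_n"]) - target) ** 2).mean()
```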
Equation takes the form of the Bellman expectation equation, which justifies learning the credit function, parameterized by θ Qc, by optimizing the standard loss function in deep RL: While centralized training means the input space scales linearly with agent count, many practical environments involving only local interactions between agents allows centralized training with few agents while retaining decentralized performance when deployed at scale (evidenced in Appendix E). We use the credit function as a critic within a policy gradient for multi-goal MARL. Letting θ parameterize π, the overall objective J(π) is maximized by ascending the following gradient: Proposition 2. The cooperative multi-goal credit function based MARL policy gradient is This is derived in Appendix B.2. For a fixed agent m, the inner summation over n considers all agents' goals g n and updates m's policy based on the advantage of a m over all counterfactual actionŝ a m, as measured by the credit function for g n. The strength of interaction between action-goal pairs is captured by the extent to which Q π n (s,â m) varies withâ m, which directly impacts the magnitude of the gradient on agent m's policy. For example, strong interaction in non-constant Q π n (s, ·), which implies larger magnitude of A π n,m and larger weight on ∇ θ log π(a m). The double summation accounts for first-order interaction between all action-goal pairs, but complexity can be reduced by omitting terms when interactions are known to be sparse, and our empirical runtimes are on par with other methods due to efficient batch computation (Appendix F). As the second term in A π n,m is a baseline, the reduction of variance can be analyzed similarly to that for COMA, given in Appendix C.2.), ablation show stability improvement due to the credit function (Section 6). As the credit function takes in a single agent's action, it synergizes with both CM3's curriculum and function augmentation as described in Section 4.5. Multi-goal MARL poses a significant challenge for exploration. Random exploration can be highly inefficient for concurrently learning both individual task completion and cooperative behavior. Agents who cannot make progress toward individual goals may rarely encounter the region of state space where cooperation is needed, rendering any exploration useless for learning cooperative behavior. On the other extreme, exploratory actions taken in situations that require precise coordination can easily lead to penalties that cause agents to avoid the coordination problem and fail to achieve individual goals. Instead, we hypothesize and confirm in experiments that agents who can achieve individual goals in the absence of other agents can more reliably produce state configurations where cooperative solutions are easily discovered with additional exploration in the multi-agent environment 2. We propose a MARL curriculum that first solves a single-agent Markov decision process (MDP), as preparation for subsequent exploration speedup. Given a cooperative multi-goal Markov game MG, we induce an MDP M to be the tuple S n, O n, A n, P n, R, γ, where an agent n is selected to be the single agent in M. Entities S n, P n, and R are defined by removing all dependencies on agent interactions, so that only components depending on agent n remain. This reduction to M is possible in almost all fully cooperative multi-agent environments used in a large body of work 3 , precisely because they support a variable number of agents, including N = 1. 
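The displayed gradient of Proposition 2 is likewise missing from this text, but the surrounding description pins down its structure: agent m's log-probability is weighted by the advantage of its chosen action over the policy-weighted average of counterfactual actions, measured by the credit function for each goal g_n, summed over action-goal pairs. The sketch below computes that counterfactual advantage for discrete actions; batching is omitted and the exact functional form is a reconstruction, not the authors' equation verbatim.

```python
# Illustrative reconstruction of the counterfactual advantage from Prop. 2.
import torch


def counterfactual_advantage(qc, s, g_n, pi_m_probs, a_m_index, num_actions):
    # qc(s, a_onehot, g_n) -> scalar credit value Q_n(s, a^m).
    onehots = torch.eye(num_actions)
    q_all = torch.stack([qc(s, onehots[a], g_n) for a in range(num_actions)])
    baseline = (pi_m_probs * q_all).sum()     # policy-weighted counterfactual average
    return q_all[a_m_index] - baseline        # advantage A_{n,m}


def multi_goal_pg_loss(log_prob_a_m, advantages_over_goals):
    # Advantages are detached so gradients flow only through log pi(a^m);
    # the sum over goals realizes the inner summation over n.
    adv = sum(a.detach() for a in advantages_over_goals)
    return -(log_prob_a_m * adv)
```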
Important real-world settings that allow this reduction include autonomous driving, multi traffic light control, and warehouse commissioning (removing all but one car/controller/robot, respectively, from the environment). Given a full Markov game implementation, the reduction involves only deletion of components associated with all other agents from state vectors (since an agent is uniquely defined by its attributes), deletion of if-else conditions from the reward function corresponding to agent interactions, and likewise from the transition function if a simulation is used. Appendix G provides practical guidelines for the reduction. Based on M, we define a greedy policy for MG. Definition 2. A greedy policy π n by agent n for cooperative multi-goal MG is defined as the optimal policy π * for the induced MDP M where only agent n is present. This naturally leads to our proposed curriculum: Stage 1 trains a single agent in M to achieve a greedy policy, which is then used for initialization in MG in Stage 2. Next we explain in detail how to leverage the structure of decentralized MARL to bridge the two curriculum stages. In Markov games with decentralized execution, an agent's observation space decomposes into self captures the agent's own properties, which must be observable by the agent for closed-loop control, while o n others ∈ O n others is the agent's egocentric observation of other agents. In our work, egocentric observations are private and not accessible by other agents . Similarly, global state s decomposes into s:= (s env, s n, s −n), where Function augmentation π π 1 Figure 1: In Stage 1, Q 1 and π 1 learn to achieve multiple goals in a single-agent environment. Between Stage 1 and 2, π is constructed from the trained π 1 and a new module π 2 according to (same construction is done for Q n (s, a) and Q n (s, a m), not shown). In the multi-agent environment of Stage 2, these augmented functions are instantiated for each of N agents (with parameter-sharing). s env is environment information not specific to any agent (e.g., position of a landmark), and s n captures agent n's information. While this decomposition is implicitly available in a wide range of complex multi-agent environments (; ; ; ; ;), we explicitly use it to implement our curriculum. In Stage 1, as the ability to process o n others and s −n is unnecessary, we reduce the input space of policy and value functions, thereby reducing the number of trainable parameters and lowering the computation cost. In Stage 2, we restore Stage 1 parameters and activate new modules to process additional inputs o n others and s −n. This augmentation is especially suitable for efficiently learning the credit function and global Q-function, since Q(s, a) can be augmented into both Q π n (s, a) and Q π n (s, a m), as explained below. We combine the preceding components to create CM3, using deep neural networks for function approximation (Figure 1 and Algorithm 1). Without loss of generality, we assume parameter-sharing among homogeneous agents with goals as input . The inhomogeneous case can be addressed by N actor-critics. Drawing from multi-task learning , we sample goal(s) in each episode for the agent(s), to train one model for all goals. Stage 1. We train an actor π 1 (a|o, g) and critic Q 1 (s 1, a, g) to convergence according to and Stage 2. The Markov game is instantiated with all N agents. 
We restore the trained π 1 parameters, instantiate a second neural network π 2 for agents to process o n others, and connect the output of π 2 to a selected hidden layer of π 1. Concretely, let h Being restored from Stage 1, not re-initialized, hidden layers i < i * begin with the ability to process (o n self, g n), while the new weights in π 2 and W 1:2 specifically learn the effect of surrounding agents. Higher layers i ≥ i * that already take greedy actions to achieve goals in Stage 1 must now do so while cooperating to allow other agents' success. This augmentation scheme is simplest for deep policy and value networks using fully-connected or convolutional layers. The middle panel of Figure 1 depicts the construction of π from π 1 and π 2. The global Q π (s, a, g n) is constructed from Q 1 similarly: when the input to Q 1 is (s env, s n, a n, g n), a new module takes input (s −n, a −n) and connects to a chosen hidden layer of 4 Setting i * to be the last hidden layer worked well in our experiments, without needing to tune. augmented from a copy of Q 1, such that when We train the policy using, train the credit function with loss, and train the global Q-function with the joint-action analogue of. We investigated the performance and robustness of CM3 versus existing methods on diverse and challenging multi-goal MARL environments: cooperative navigation in difficult formations, double lane merge in autonomous driving, and strategic cooperation in a Checkers game. We evaluated ablations of CM3 on all domains. We describe key setup here, with full details in Appendices G to J. Cooperative navigation: We created three variants of the cooperative navigation scenario in , where N agents cooperate to reach a set of targets. We increased the difficulty by giving each agent only an individual reward based on distance to its designated target, not a global team reward, but initial and target positions require complex cooperative maneuvers to avoid collision penalties (Figure 3). Agents observe relative positions and velocities (details in Appendix G.1). SUMO: Previous work modeled autonomous driving tasks as MDPs in which all other vehicles do not learn to respond to a single learning agent . However, real-world driving requires cooperation among different drivers' with personal goals. Built in the SUMO traffic simulator with sublane resolution , this experiment requires agent vehicles to learn double-merge maneuvers to reach goal lane assignments (Figure 4). Agents have limited field of view and receive sparse rewards (Appendix G.2). Checkers: We implemented a challenging strategic game (Appendix G.3, an extension of), to investigate whether CM3 is beneficial even when an agent cannot maximize its reward in the absence of another agent. In a gridworld with red and yellow squares that disappear when collected (Figure 2), Agent A receives +1 for red and -0.5 for yellow; Agent B receives -0.5 for red and +1 for yellow. Both have a limited 5x5 field of view. The global optimum requires each agent to clear the path for the other. Algorithm implementations. We describe key points here, leaving complete architecture details and hyperparameter tables to Appendices H and I. CM3: Stage 1 is defined for each environment as follows (Appendix G): in cooperative navigation, a single particle learns to reach any specified landmark; in SUMO, a car learns to reach any specified goal lane; in Checkers, we alternate between training one agent as A and B. Appendix H describes function augmentation in Stage 2 of CM3. 
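Function augmentation as described above can be sketched for the actor as follows: the Stage-1 network processes (o_self, g) and its weights are restored, a new module processes o_others, and the new module's output is injected into a chosen hidden layer i* through new weights. Whether the combination is additive (as below) or concatenative, and which layer is chosen, are assumptions based on the described scheme; layer sizes are placeholders.

```python
# Hypothetical sketch of function augmentation for the CM3 actor.
import torch
import torch.nn as nn


class AugmentedActor(nn.Module):
    def __init__(self, o_self_dim, goal_dim, o_others_dim, n_actions, hidden=64):
        super().__init__()
        # Stage-1 layers (weights restored from single-agent training).
        self.fc1 = nn.Linear(o_self_dim + goal_dim, hidden)   # layers i < i*
        self.fc_out = nn.Linear(hidden, n_actions)            # layers i >= i*
        # Stage-2 additions: module for other agents plus new mixing weights.
        self.pi2 = nn.Sequential(nn.Linear(o_others_dim, hidden), nn.ReLU())
        self.w2 = nn.Linear(hidden, hidden, bias=False)

    def forward(self, o_self, g, o_others=None):
        h = torch.relu(self.fc1(torch.cat([o_self, g], dim=-1)))
        if o_others is not None:                  # Stage 2: inject pi2's output
            h = torch.relu(h + self.w2(self.pi2(o_others)))
        return torch.softmax(self.fc_out(h), dim=-1)
```

Per the text, the same augmentation pattern is applied to the global Q-function and the credit function, with (s_{-n}, a_{-n}) as the new module's input.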
COMA : the joint goal g and total reward n R n can be used to train COMA's global Q function, which receives input (s, o n, g n, n, a −n, g −n). Each output node i represents Q(s, a n = i, a −n, g). IAC : IAC trains each agent's actor and critic independently, using the agent's own observation. The TD error of value function V (o n, g n) is used in a standard policy gradient . QMIX : we used the original hypernetwork, giving all goals to the mixer and individual goals to each agent network. We used a manual coordinate descent on exploration and learning rate hyperparameters, including values reported in the original works. We ensured the number of trainable parameters are similar among all methods, up to method-specific architecture requirements for COMA and QMIX. Ablations. We conducted ablation experiments in all domains. To discover the speedup from the curriculum with function augmentation, we trained the full Stage 2 architecture of CM3 (labeled as Direct) without first training components π 1 and Q 1 in an induced MDP. To investigate the benefit of the new credit function and multi-goal policy gradient, we trained an ablation (labeled QV) with, where credit assignment between action-goal pairs is lost. QV uses the same π 1, Q 1, and function augmentation as CM3. CM3 finds optimal or near-optimal policies significantly faster than IAC and COMA on all domains, and performs significantly higher than QMIX in four out of five. We report absolute runtime in Appendix F and account for CM3's Stage 1 episodes (Appendix J) when comparing sample efficiency. Main comparison. Over all cooperative navigation scenarios (Figures 5a to 5c), CM3 (with 1k episodes in Stage 1) converged more than 15k episodes faster than IAC. IAC reached the same final performance as CM3 because dense individual rewards simplifies the learning problem for IAC's fully decentralized approach, but CM3 benefited significantly from curriculum learning, as evidenced by comparison to "Direct" in Figure 5f. QMIX and COMA settled at suboptimal behavior. Both learn global critics that use all goals as input, in contrast to CM3 and IAC that process each goal separately. This indicates the difficulty of training agents for individual goals under a purely global approach. While COMA was shown to outperform IAC in SC2 micromanagement where IAC must learn from a single team reward , our IAC agents have access to individual rewards that resolve the credit assignment issue and improve performance . In SUMO (Figure 5d), CM3 and QMIX found cooperative solutions with performances within the margin of error, while COMA and IAC could not break out of local optima where vehicles move straight but do not perform merge maneuvers. Since initial states force agents into the region of state space requiring cooperation, credit assignment rather than exploration is the dominant challenge, which CM3 addressed via the credit function, as evidenced in Figure 5i. IAC underperformed because SUMO requires a longer sequence of cooperative actions and gave much sparser rewards than the "Merge" scenario in cooperative navigation. We also show that centralized training of merely two decentralized agents allows them to generalize to settings with much heavier traffic (Appendix E). In Checkers (Figure 5e), CM3 (with 5k episodes in Stage 1) converged 10k episodes faster than COMA and QMIX to the global optimum with score 24. 
Both exploration of the combinatorially large joint trajectory space and credit assignment for path clearing are challenges that CM3 successfully addressed. COMA only solved Checkers among all domains, possibly because the small bounded environment alleviates COMA's difficulty with individual goals in large state spaces. IAC underperformed all centralized learning methods because cooperative actions that give no instantaneous reward are hard for selfish agents to discover in Checkers. These results demonstrate CM3's ability to attain individual goals and find cooperative solutions in diverse multi-agent systems.

Ablations. The significantly better performance of CM3 versus "Direct" (Figures 5f to 5j) shows that learning individual goal attainment prior to learning multi-agent cooperation, and initializing Stage 2 with Stage 1 parameters, are crucial for improving learning speed and stability. It gives evidence that while global action-value and credit functions may be difficult to train from scratch, function augmentation significantly eases the learning problem. While "QV" initially learns quickly to attain individual goals, it does so at the cost of frequent collisions, higher variance, and an inability to maintain a cooperative solution, giving clear evidence for the necessity of the credit function.

We presented CM3, a general framework for cooperative multi-goal MARL. CM3 addresses the need for efficient exploration to learn both individual goal attainment and cooperation, via a two-stage curriculum bridged by function augmentation. It achieves local credit assignment between actions and goals using a credit function in a multi-goal policy gradient. In diverse experimental domains, CM3 attains significantly higher performance, faster learning, and overall robustness than existing MARL methods, displaying strengths of both independent learning and centralized credit assignment while avoiding shortcomings of existing methods. Ablations demonstrate each component is crucial to the whole framework. Our results motivate future work on analyzing CM3's theoretical properties and generalizing to inhomogeneous systems or settings without known goal assignments.

Algorithm (appendix): CM3 training procedure (partial listing; algorithm line numbers shown where available):

    Instantiate N > 1 agents
    8:  Set all target network weights to equal main network weights
    13: Initialize exploration parameter ε = ε_start and empty replay buffer B
        for each training episode e = 1 to E do
            for t = 1 to T do              // execute policies in environment
                Sample action a^n_t ~ π(a^n_t | o^n_t; θ_π, ε) for each agent
            Compute global target for all n: [...]
            Gradient descent on L(θ_Qg) = [...]
            if c = 1 then [...]
    29:     end if
    35:     Update policy: [...]
            Update all target network parameters using: [...]
            Reset buffer B

Off-policy training with a large replay buffer allows RL algorithms to benefit from less correlated transitions. The algorithmic modification for off-policy training is to maintain a circular replay buffer that does not reset (i.e. remove line 38), and conduct training (lines 24-41) while executing policies in the environment (lines 17-22). Despite introducing bias in MARL, we found that off-policy training benefited CM3 in SUMO and Checkers.
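The circular replay buffer just described can be sketched in a few lines. The capacity and batch size follow the replay size of 1e4 and minibatch size of 128 reported in the hyperparameter tables, but the class itself is only an illustrative assumption of how the non-resetting buffer might be implemented.

    import random

    class CircularReplayBuffer:
        def __init__(self, capacity):
            self.capacity = capacity
            self.storage = []
            self.position = 0

        def add(self, transition):
            # transition: (obs, actions, reward, next_obs, done)
            if len(self.storage) < self.capacity:
                self.storage.append(transition)
            else:
                self.storage[self.position] = transition  # overwrite the oldest entry
            self.position = (self.position + 1) % self.capacity

        def sample(self, batch_size):
            return random.sample(self.storage, batch_size)

        def __len__(self):
            return len(self.storage)

    buf = CircularReplayBuffer(capacity=10000)
    for step in range(25000):              # only the most recent 10000 are kept
        buf.add((step, 0, 0.0, step + 1, False))
    batch = buf.sample(128)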
By stationarity and relabeling t, the credit function can be written: Using the law of iterated expectation, the credit function satisfies the Bellman expectation equation: The goal-specific joint value function is the marginal of the credit function: The credit function can be expressed in terms of the goal-specific action-value function: First we state some elementary relations between global functions V π n (s) and Q π n (s, a). These carry over directly from the case of an MDP, by treating the joint policy π as as an effective "single-agent" policy and restricting attention to a single goal g n (standard derivations are included at the end of this section). We follow the proof of the policy gradient theorem : We can replace Q π n (s, a) by the advantage function A π n (s, a):= Q π n (s, a) − V π n (s), which does not change the expectation in Equation because: So the gradient can be written Recall that from, for any choice of agent label k ∈ [1..N]: Then substituting into: Now notice that the choice of k in is completely arbitrary, since holds for any k ∈ [1..N]. Therefore, it is valid to distribute A π n,k (s, a) into the summation in using the summation index m instead of k. Further summing over all n, we arrive at the of Proposition 2: The relation between V π n (s) and Q π n (s, a) in and are derived as follows: Let Q:= Q π (s, a, g) denote the centralized Q function, let π(a n):= π(a n |o n, g n) denote a single agent's policy, and let π(a −n):= π(a −n |o −n, g −n) denote the other agents' joint policy. In cooperative multi-goal MARL, the direct application of COMA has the following gradient. Define the following: f n ]. its variance can be derived to be : The CM3 gradient can be rewritten as As before, z m:= ∇ θ log π(a m). Define h nm:= z m (Q n − b nm (s)) and let h n:= m h nm. Then the variance is A greedy initialization can provide significant improvement in multi-agent exploration versus naïve random exploration, as shown by a simple thought experiment. Consider a two-player MG defined by a 4 × 3 gridworld with unit actions (up, down, left, right). Agent A starts at with goal, while agent B starts at with goal. The greedy policy for each agent in MG is to move horizontally toward its target, since this is optimal in the induced M (when the other agent is absent). Case 1: Suppose that for ∈, A and B follow greedy policies with probability 1 −, and take random actions (p(a) = 1/4) with probability. Then the probability of a symmetric optimal trajectory is P (cooperate) = 2 2 ((1 −) + /4) 8. For = 0.5, P (cooperate) ≈ 0.01. Case 2: If agents execute uniform random exploration, then P (cooperate) = 3.05e-5 0.01. We investigated whether policies trained with few agent vehicles (N = 2) on an empty road can generalize to situations with heavy SUMO-controlled traffic. We also tested on initial and goal lane configurations (C3 and C4) which occur with low probability when training with configurations C1 and C2. Table 1 shows the sum of agents' reward, averaged over 100 test episodes, on these configurations that require cooperation with each other and with minimally-interactive SUMOcontrolled vehicles for success. CM3's higher performance than IAC and COMA in training is reflected by better generalization performance on these test configurations. 
There is almost negligible decrase in performance from train Figure 5d to test, giving evidence to our hypothesis that centralized training with few agents is feasible even for deployment in situations with many agents, for certain applications where local interactions are dominant. F ABSOLUTE RUNTIME CM3's higher sample efficiency does not come at greater computational cost, as all methods' runtimes are within an order of magnitude of one another. Test times have no significant difference as all neural networks were similar. The full Markov game for each experimental domain, along with the single-agent MDP induced from the Markov game, are defined in this section. In all domains, each agent's observation in the Markov game consists of two components, o self and o others. CM3 leverages this decomposition for faster training, while IAC, COMA and QMIX do not. This domain is adapted from the multi-agent particle environment in. Movable agents and static landmarks are represented as circular objects located in a 2D unbounded world with real-valued position and velocity. Agents experience contact forces during collisions. A simple model of inertia and friction is involved. State. The global state vector is the concatenation of all agents' absolute position (x, y) ∈ R 2 and velocity (v x, v y) ∈ R 2. Observation. Each agent's observation of itself, o self, is its own absolute position and velocity. Each agent's observation of others, o others, is the concatenation of the relative positions and velocities of all other agents with respect to itself. Actions. Agents take actions from the discrete set do nothing, up, down, left, right, where the movement actions produce an instantaneous velocity (with inertia effects). Goals and initial state assignment. With probability 0.2, landmarks are given uniform random locations in the set (−1, 1) 2, and agents are assigned initial positions uniformly at random within the set (−1, 1) 2. With probability 0.8, they are predefined as follows (see Figure 3). In "Antipodal", landmarks for agents 1 to 4 have (x, y) coordinates [(0.9,0.9), (-0.9,-0.9), (0.9,-0.9), (-0.9,0.9)], while agents 1 to 4 are placed at [(-0.9,-0.9), (0.9,0.9), (-0.9,0.9), (0.9,-0.9)]. In "Intersection", landmark coordinates are [(0.9,-0.15), (-0.9,0.15), (0.15,0.9), (-0.15,-0.9 Reward. At each time step, each agent's individual reward is the negative distance between its position and the position of its assigned landmark. If a collision occurs between any pair of agents, both agents receive an additional -1 penalty. A collision occurs when two agents' distance is less than the sum of their radius. Termination. Episode terminates when all agents are less than 0.05 distance from assigned landmarks. Induced MDP. This is the N = 1 case of the Markov game, used by Stage 1 of CM3. The single agent only receives o self . In each episode, its initial position and the assigned landmark's initial position are both uniform randomly chosen from (−1, 1) 2. We constructed a straight road of total length 200m and width 12.8m, consisting of four lanes. All lanes have width 3.2m, and vehicles can be aligned along any of four sub-lanes within a lane, with lateral spacing 0.8m. Vehicles are emitted at average speed 30m/s with small deviation. Simulation time resolution was 0.2s per step. SUMO file merge_stage3_dense.rou.xml contains all vehicle parameters, and merge.net.xml defines the complete road architecture. State. 
The global state vector s is the concatenation of all agents' absolute position (x, y), normalized respectively by the total length and width of the road, and horizontal speed v normalized by 29m/s. Observation. Each agent observation of itself o n self is a vector consisting of: agent speed normalized by 29m/s, normalized number of sub-lanes between agent's current sub-lane and center sub-lane of goal lane, and normalized longitudinal distance to goal position. Each agent's observation of others o n others is a discretized observation tensor of shape centered on the agent, with two channels: binary indicator of vehicle occupancy, and normalized relative speed between agent and other vehicles. Each channel is a matrix with shape, corresponding to visibility of 15m forward and backward (with resolution 2.5m) and four sub-lanes to the left and right. Actions. All agents have the same discrete action space, consisting of five options: no-op (maintain current speed and lane), accelerate (2.5m/s 2), decelerate (−2.5m/s 2), shift one sub-lane to the left, shift one sub-lane to the right. Each agent's action a n is represented as a one-hot vector of length 5. Goals and initial state assignment. Each goal vector g n is a one-hot vector of length 4, indicating the goal lane at which agent n should arrive once it crosses position x=190m. With probability 0.2, agents are assigned goals uniformly at random, and agents are assigned initial lanes uniformly at random at position x=0. With probability 0.8, agent 1's goal is lane 2 and agent 2's goal is lane 1, while agent 1 is initialized at lane 1 and agent 2 is initialized at lane 2 (see Figure 4). Departure times were drawn from a normal distribution with mean 0s and standard deviation 0.5s for each agent. Reward. The reward R(s t, a t, g n) for agent n with goal g n is given according to the conditions: -1 for a collision; -10 for time-out (exceed 33 simulation steps during an episode); 10(1 − ∆) for reaching the end of the road and having a normalized sub-lane difference of ∆ from the center of the goal lane; and -0.1 if current speed exceeds 35.7m/s. Termination. Episode terminates when 33 simulation steps have elapsed or all agents have x >190m. Induced MDP. This is the N = 1 case of the Markov game defined above, used by Stage 1 of CM3. The single agent receives only o self. For each episode, agent initial and goal lanes are assigned uniformly at random from the available lanes. This domain is adapted from the Checkers environment in. It is a gridworld with 5 rows and 13 columns (Figure 2). Agents cannot move to the two highest and lowest rows and the two highest and lowest columns, which are placed for agents' finite observation grid to be well-defined. Agents cannot be in the same grid location. Red and yellow collectible reward are placed in a checkered pattern in the middle 3x8 region, and they disappear when any agent moves to their location. State. The global state s consists of two components. The first is s T, a tensor of shape, where the two "channels" in the last dimension represents the presence/absence of red and yellow rewards as 1-hot matrices. The second is s V, the concatenation of all agents' (x, y) location (integer-valued) and the number of red and yellow each agent has collected so far. Observation. Each agent's obsevation of others, o n others, is the concatenation of all other agents' normalized coordinates (normalized by total size of grid). An agent's observation of itself, o n self, consists of two components. 
First, o n self,V is a vector concatenation of agent n's normalized coordinate and the number of red and yellow it has collected so far. Second, o n self,T is a tensor of shape, centered on its current location in the grid. The tensor has three "channels", where the first two represent presence/absence of red and yellow rewards as 1-hot matrices, and the last channel indicates the invalid locations as a 1-hot matrix. The agent's own grid location is a valid location, while other agents' locations are invalid. Actions. Agents choose from a discrete set of actions do-nothing, up, down, left, right. Movement actions transport the agent one grid cell in the chosen direction. Goals. Agent A's goal is to collect all red rewards without touching yellow. Agent B's goal is to collect all yellow without touching red. The goal is represented as a 1-hot vector of length 2. Reward. Agent A gets +1 for red, -0.5 for yellow. Agent B gets -0.5 for red, +1 for yellow. For all experiment domains, ReLU nonlinearity was used for all neural network layers unless otherwise specified. All layers are fully-connected feedforward layers, unless otherwise specified. All experiment domains have a discrete action space (with |A| = 5 actions), and action probabilities were computed by lower-bounding softmax outputs of all policy networks by P (a n = i) = (1 −)softmax(i) + /|A|, where is a decaying exploration parameter. To keep neural network architectures as similar as possible among all algorithms, our neural networks for COMA differ from those of in that we do not use recurrent networks, and we do not feed previous actions into the Q function. For the Q network in all implementations of COMA, the value of each output node i is interpreted as the action-value Q(s, a −n, a n = i, g) for agent n taking action i and all other agents taking action a −n. Also for COMA, agent n's label vector (one-hot indicator vector) and observation o self were used as input to COMA's global Q function, to differentiate between evaluations of the Q-function for different agents. These were choices in COMA. COMA uses the same policy network as Stage 2 of CM3. The global Q function of COMA computes Q(s, (a n, a −n)) for each agent n as follows. Input is the concatenation of state s, all other agents' 1-hot actions a −n, agent n's goal g n, all other agent goals g −n, agent label n, and agent n's observation o n self. This is passed through two layers of 128 units each, then connected to a linear output layer with 5 units. The Q 1 function in Stage 1 feeds the concatenation of state s, goal g, and 1-hot action a to one layer with 256 units, which is connected to the special layer h The Q 1 function in Stage 1 is defined as: state tensor s T is fed to a convolutional layer with 4 filters of size 3x5 and stride 1x1 and flattened. o n self,T is given to a convolution layer with 6 filters of size 3x3 and stride 1x1 and flattened. Both are concatenated with s n (agent n part of the s V vector), goal g n, action a n and o COMA. COMA uses the same policy network as Stage 2 of CM3. The global Q(s, (a n, a −n)) function of COMA is defined as follows for each agent n. Tensor part of global state s T is given to a convolutional layer with 4 filters of size 3x5 and stride 1x1. Tensor part of agent n's observation o n self,T is given to a convolutional layer with 6 filters of size 3x3 and stride 1x1. 
Outputs of both convolutional layers are flattened, then concatenated with s_V, all other agents' actions a^{-n}, agent n's goal g^n, other agents' goals g^{-n}, agent n's label vector, and agent n's vector observation o^n_self,V. The concatenation is passed through two layers with 256 units each, then to a linear output layer with 5 units.

QMIX. Individual value functions are defined as follows: o^n_self,T is passed through the same convolutional layer as above, connected to a hidden layer with 32 units, then concatenated with o^n_self,V, a^n_{t-1}, and g^n. This is connected to layer h_2 with 64 units. o^n_others is connected to a layer with 64 units, then connected to h_2. h_2 is fully-connected to an output layer. The mixing network feeds s_T into the same convolutional network as above and follows the exact architecture of the original QMIX mixing network with embedding dimension 128.

We used the Adam optimizer in TensorFlow with hyperparameters in Tables 3 to 5. ε_div is used to compute the exploration decrement step ε_step := (ε_start - ε_end)/ε_div. [Tables 3 to 5, partial: unlabeled row: 5e2, 1e3, 1e3, 1e4, 2e4, 1e4, 1e4; Replay buffer: 1e4 in all settings; Minibatch size: 128 in all settings; "Steps per ..." row truncated.] Stage 1 training curves for all three experimental domains are shown in Figure 6.
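The exploration scheme used across these configurations combines the lower-bounded softmax described above with the linear ε decay defined by ε_div. A small sketch follows; the specific start and end values, and decaying once per update step rather than per episode, are illustrative assumptions (the actual per-setting values are in Tables 3 to 5).

    import numpy as np

    def bounded_action_probs(logits, eps):
        # P(a = i) = (1 - eps) * softmax(i) + eps / |A|, so no action probability
        # falls below eps / |A|.
        z = np.exp(logits - logits.max())
        p = z / z.sum()
        return (1.0 - eps) * p + eps / len(logits)

    def epsilon_at(step, eps_start, eps_end, eps_div):
        # Linear decay with decrement (eps_start - eps_end) / eps_div, floored at eps_end.
        decrement = (eps_start - eps_end) / eps_div
        return max(eps_end, eps_start - decrement * step)

    logits = np.array([2.0, 1.0, 0.1, -1.0, 0.0])
    print(bounded_action_probs(logits, eps=epsilon_at(0, 0.5, 0.05, 1000)))
    print(epsilon_at(1000, 0.5, 0.05, 1000), epsilon_at(5000, 0.5, 0.05, 1000))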
We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments. The Adversarial Imitation Learning (AIL) class of algorithms learns a policy that robustly imitates an expert's actions via a collection of expert demonstrations, an adversarial discriminator and a reinforcement learning method. For example, the Generative Adversarial Imitation Learning (GAIL) algorithm BID19 ) uses a discriminator reward and a policy gradient algorithm to imitate an expert RL policy. Similarly, the Adversarial Inverse Reinforcement Learning (AIRL) algorithm BID10 ) makes use of a modified GAIL discriminator to recover a reward function to perform Inverse Reinforcement Learning (IRL) BID1. Additionally, this subsequent dense reward is robust to changes in dynamics or environment properties. Importantly, AIL algorithms such as GAIL and AIRL, obtain higher performance than supervised Behavioral Cloning (BC) when using a small number of expert demonstrations; experimentally suggesting that AIL algorithms alleviate some of the distributional drift BID35 issues associated with BC. However, these AIL methods suffer from two important issues that will be addressed by this work: 1) a large number of policy interactions with the learning environment is required for policy convergence and 2) although in principle these methods can learn rewards for absorbing states, the original implementations suffer from improper handling of the environment terminal states. This introduces implicit rewards priors which can either improve or degrade policy performance. Figure 1: The Discriminator-Actor-Critic imitation learning framework combined with a method to explicitly learn rewards for the absorbing states. While GAIL requires as little as 200 expert frame transitions (from 4 expert trajectories) to learn a robust reward function on most MuJoCo BID41 tasks, the number of policy frame transitions sampled from the environment can be as high as 25 million in order to reach convergence. If PPO ) is used in place of TRPO BID37, the sample complexity can be improved (for example, as in Figure 3, 25 million steps reduces to approximately 10 million steps), however it is still intractable for many robotics or real-world applications. In this work we address this issue by incorporating an off-policy RL algorithm (TD3 BID11) and an off-policy discriminator to dramatically decrease the sample complexity by orders of magnitude. In this work, we also illustrate how specific design choices for AIL algorithms and MDPs used in practice, have a large impact on agent performance for environments with absorbing states. 
For instance, as we will demonstrate, if the implementation assigns zero rewards for absorbing states, a strictly positive reward function can prevent the agent from solving tasks with a minimal number of steps, while a strictly negative reward function is unable to emulate a survival bonus. Therefore, one must have some knowledge of the true environment reward and incorporate such priors to choose a suitable reward function for successful application of GAIL and AIRL. We will discuss these issues formally, and present a simple -yet effective -solution that drastically improves policy performance for environments with absorbing states; we explicitly handle absorbing state transitions by learning the reward associated with these states. First we propose a new algorithm, which we call Discriminator-Actor-Critic (DAC) (Figure 1), that is compatible with the GAIL and AIRL frameworks by extending them with an off-policy discriminator and an off-policy actor-critic reinforcement learning algorithm. Then we propose a general approach to handling absorbing states in inverse reinforcement learning and reward learning methods. We experimentally demonstrate that this removes the bias due to incorrect absorbing state handling in both GAIL-like and AIRL-like variants of our DAC algorithm. In our experiments, we demonstrate that DAC achieves state-of-the-art AIL performance for a number of difficult imitation learning tasks, where proper handling of terminal states is crucial for matching expert performance in the presence of absorbing states. More specifically, in this work we:• Identify, and propose solutions for the problem of handling terminal states of policy rollouts in standard RL benchmarks in the context of AIL algorithms.• Accelerate learning from demonstrations by providing an off-policy variant for AIL algorithms, which significantly reduces the number of agent-environment interactions.• Illustrate the robustness of DAC to noisy, multi-modal and constrained expert demonstrations, by performing experiments with human demonstrations on non-trivial robotic tasks. Imitation learning has been broadly studied under the twin umbrellas of Behavioral Cloning (BC) BID3 BID35 and Inverse Reinforcement Learning (IRL) BID27. To recover the underlying policy, IRL performs an intermediate step of estimating the reward function followed by RL on this function BID1 BID32. Operating in the Maximum Entropy IRL formulation BID47, BID9 introduce an iterativesampling based estimator for the partition function, deriving an algorithm for recovering non-linear reward functions in high-dimensional state and action spaces. BID8 and BID10 further extend this by exploring the theoretical and practical considerations of an adversarial IRL framework, and draw connections between IRL and cost learning in GANs BID12.In practical scenarios, we are often interested in recovering the expert's policy, rather than the reward function. Following BID40, and by treating imitation learning as an occupancy matching problem, BID19 proposed a Generative Adversarial Imitation Learning (GAIL) framework for learning a policy from demonstrations, which bypasses the need to recover the expert's reward function. More recent work extends the framework by improving on stability and robustness BID21 and making connections to model-based imitation learning BID4. These approaches generally use on-policy algorithms for policy optimization, trading off sample efficiency for training stability. 
Learning complex behaviors from sparse reward signals poses a significant challenge in reinforcement learning. In this context, expert demonstrations or template trajectories have been successfully used BID31 for initializing RL policies. There has been a growing interest in combining extrinsic sparse reward signals with imitation learning for guided exploration BID46 BID20 BID23 BID44. Off policy learning from demonstration has been previously studied under the umbrella of accelerating reinforcement learning by structured exploration BID26 An implicit assumption of these approaches is access to demonstrations and reward from the environment; our approach requires access only to expert demonstrations. Biases associated with specific MDP benchmarks also arise in the standard RL setup. In particular, BID29 and BID43 discuss handling of time limits in RL specifically with MDPs where time limits make the problems non-Markovian and might affect optimality of the training policy and value function estimation. The problem with the biases associated with episode terminations also prove to be severe for AIL algorithms because for specific RL benchmarks the absorbing states might not even be adequately taken into consideration. We discuss this in more detail in Section 4.1.Our work is most related to AIL algorithms BID19 BID10 BID42. In contrast to BID19 which assumes (state-action-state') transition tuples, BID42 has weaker assumptions, by relying only on observations and removing the dependency on actions. The contributions in this work are complementary (and compatible) to BID42.Concurrent to our work, several other papers introduced algorithms for sample efficient imitation learning. BID5 introduced Sample-efficient Adversarial Mimic (SAM) algorithm that combines Deep Deterministic Policy Gradients (DDPG) from Lillicrap et al. FORMULA3 with GAIL. While BID33 and BID36 proposed imitation learning algorithms based on off-policy reinforcement learning that does not require to learn rewards. We consider problems that satisfy the definition of a Markov Decision Process (MDP), formalized by the tuple: (S, A, p(s), p(s |s, a), r(s, a, s), γ). Here S, A represent the state and action spaces respectively, p(s) is the initial state distribution, p(s |s, a) defines environment dynamics represented as a conditional state distribution, r(s, a, s) is reward function and γ the return discount factor. In continuing tasks, where environment interactions are unbounded in sequence length, the returns for a trajectory τ = {(s t, a t)} ∞ t=0, are defined as DISPLAYFORM0 In order to use the same notation for tasks with absorbing states, whose finite length episodes end when reaching a terminal state, we can define a set of absorbing states s a BID39 that an agent enters after the end of episode, has zero reward and transitions to itself for all agent actions: s a ∼ p(·|s T, a T), r(s a, ·, ·) = 0 and s a ∼ p(·|s a, ·) (see Figure 2). With this above absorbing state notation, returns can be defined simply as DISPLAYFORM1 In reinforcement learning, the goal is to learn a policy that maximizes expected returns. DISPLAYFORM2 We depict an episode of MDP with an absorbing state. The absorbing state transitions to itself with zero reward. In many imitation learning and IRL algorithms a common assumption is to assign zero reward value, often implicitly, to absorbing states. Moreover, standard benchmark MDPs, such as the tasks in OpenAI Gym, omit absorbing states and corresponding transitions from rollouts. 
Under this omission and a de-facto reward of 0 to absorbing states, the standard AIL algorithms do not have access to absorbing states in the buffer, which biases the reward learning process. We propose a modification that enables our DAC algorithm to assign a learned, potentially non-zero, reward for absorbing states. We discuss this in detail in Section 4.1, and demonstrate empirically in Section 5.2 that it is extremely important to properly handle the absorbing states for algorithms where rewards are learned. Considering the implications of adequate handling of terminal states, it is worth mentioning that practical implementations of MDP benchmarks terminate episodes after a specific number of steps. We refer to this as time dependent termination, which makes the tasks non-Markovian, since the returns are now time-dependent as observed in BID29, BID43. These works propose to fix this problem by using a time-dependent value function, or by bootstrapping after the terminal state instead of masking the returns, which can be achieved using an algorithm that incorporates value function learning BID11. Because our solution is derived for infinite horizon problems, we do not treat states that occur after time-dependent termination as absorbing states and assume that after explicitly adding absorbing states and transitions all tasks have infinite horizon (for example, see Figure 2). For this reason, in our implementation we use the latter approach and perform bootstrapping for the terminal states (for elaborate discussion on time limits in MDPs, we refer the reader to BID29). In order to learn a robust reward function we use the GAIL framework BID19. Inspired by maximum entropy IRL BID47 and Generative Adversarial Networks (GANs) BID12, GAIL trains a binary classifier, D(s, a), referred to as the discriminator, to distinguish between transitions sampled from an expert and those generated by the trained policy. In standard GAN frameworks, a generator gradient is calculated by backprop through the learned discriminator. However, in GAIL the policy is instead provided a reward for confusing the discriminator, which is then maximized via some on-policy RL optimization scheme (e.g. TRPO BID37): DISPLAYFORM0 where H(π) is an entropy regularization term and π E is a policy provided by an expert. The rewards learned by GAIL might not correspond to a true reward BID10 but can be used to match the expert occupancy measure, which is defined as ρ π E (s, a) = ∞ t=0 γ t p(s t = s, a t = a|π E). BID19 draw analogies between distribution matching using GANs and occupancy matching with GAIL. They demonstrate that by maximizing the above reward, the algorithm matches occupancy measures of the expert and trained policies with some regularization term defined by the choice of GAN loss function. In principle, GAIL can be incorporated with any on-policy RL algorithm. However, in this work we adapt it for off-policy training (discussed in Section 4.3). As can be seen from Equation 1, the algorithm requires state-action pairs to be sampled from the learned policy. In Section 4.3 we will discuss what modifications are necessary to adapt the algorithm to off-policy training. In this section we first elaborate on specific instances of biased rewards in AIL algorithms due to insufficient handling of terminal states. Following that in Section 4.2, we present an approach for unbiasing rewards for existing AIL algorithms. 
Then, we derive an off-policy formulation of AIL in Section 4.3, which we name Discriminator-Actor-Critic (DAC). A high level pictorial representation of this algorithm is shown in Figure 1, and it is formally summarized in Appendix A. In the following section, we present examples of bias present in implementations of different AIL algorithms as they assign zero rewards to absorbing states:• Absorbing states in MDPs: In the GAIL framework (and follow-up methods, such as GM-MIL BID21, OptionGAN BID16, AIRL and the widely used implementation of GAIL from OpenAI Baselines), for some benchmarks such as MuJoCo locomotion tasks from OpenAI Gym, a reward function r(s, a) assigns rewards to intermediate states depending on properties of a task. At the same time, policies executed on these MDPs generate rollouts that ignore absorbing states. Subsequently, the algorithms do not have access to these absorbing states in the buffer, cannot learn proper rewards, and therefore do not perform bootstrapping after terminal states; thus, 0 reward is implicitly assigned for absorbing states.• For certain environments, a survival bonus in the form of per-step positive reward is added to the rewards received by the agent. This encourages agents to survive longer in the environment to collect more rewards. We observe that a commonly used form of the reward function: r(s, a) = − log(1 − D(s, a)) has worked well for environments that require a survival bonus. Under the implicit assumption of zero rewards for absorbing states in the MDP implementation, this strictly positive estimator cannot recover the true reward function for environments where an agent is required to solve the task as quickly as possible. Using this form of the reward function will lead to sub-optimal solutions. The agent is now incentivized to move in loops or take small actions (in continuous action spaces) that keep it close to the states in the expert's trajectories. The agent keeps collecting positive rewards without actually attempting to solve the task demonstrated by the expert. • Another reward formulation is r(s, a) = log(D (s, a) ). This is often used for tasks with a per step penalty, when a part of a reward function consists of a negative constant assigned unconditionally of states and actions. However, this variant assigns only negative rewards and cannot learn a survival bonus. Such strong priors might lead to good even with no expert trajectories (as shown in FIG1).From an end-user's perspective, it is undesirable to have to craft a different reward function for every new task. In the next section, we propose a method to handle absorbing states of the standard benchmark MDPs in such a way that AIL algorithms are able to recover different reward functions without adjusting the form of reward estimator. In order to resolve the issues described in Section 4.1, we suggest explicitly learning rewards for absorbing states for expert demonstrations and trajectories produced by a policy. Thus, the returns for final states of an episode that consists of T transitions are defined now R T = r(s T, a T) + ∞ t=T +1 γ t−T r(s a, ·) with a learned reward r(s a, ·) instead of just R T = r(s T, a T) that is often used due to issues described in Section 4.2. This formulation allows the algorithms to correctly estimate returns for the final transitions and optimize the policy accordingly. 
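The sign bias of the two reward forms is easy to see numerically: for any discriminator output D(s, a) in (0, 1), -log(1 - D) is strictly positive (an implicit survival bonus), while log(D) is strictly negative (an implicit per-step penalty). A minimal check, with made-up discriminator outputs:

    import numpy as np

    def reward_positive(d):
        # r(s, a) = -log(1 - D(s, a)): strictly positive, acts like a survival bonus.
        return -np.log(1.0 - d)

    def reward_negative(d):
        # r(s, a) = log(D(s, a)): strictly negative, acts like a per-step penalty.
        return np.log(d)

    for d in [0.1, 0.5, 0.9]:   # D(s, a) is the discriminator's probability output
        print(d, reward_positive(d), reward_negative(d))
    # With an implicit reward of 0 for absorbing states, the first form discourages
    # ever terminating an episode, while the second encourages terminating as early
    # as possible, which is exactly the bias discussed above.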
In order to enable the AIL algorithms to learn the rewards for absorbing states and RL algorithms to take into account learned rewards, we suggest to update rollouts sampled from MDPs in the following way. After terminating an episode, we explicitly add a transition from the terminal state of the episode to an absorbing state (s T, s a) and a transition from an absorbing state to itself (s a, s a).Thus, when sample from the replay buffer AIL algorithms will be able to see absorbing states there were previous hidden, while RL algorithms will be able to properly estimate values for terminal states using transitions (s T, s a) and (s a, s a) using the following recursions: DISPLAYFORM0 We implemented these absorbing states by adding an extra indicator dimension that indicates whether the state is absorbing or not, for absorbing states we set the indicator dimension to one and all other dimensions to zero. The GAIL discriminator can distinguish whether reaching an absorbing state is a desirable behavior from the expert's perspective and assign the rewards accordingly. As previously mentioned, GAIL requires a significant number of interactions with a learning environment in order to imitate an expert policy. To address the sample inefficiency of GAIL, we use an off-policy RL algorithm and perform off-policy training of the GAIL discriminator performed in the following way: instead of sampling trajectories from a policy directly, we sample transitions from a replay buffer R collected while performing off-policy training: DISPLAYFORM0 Equation 2 tries to match the occupancy measures between the expert and the distribution induced by the replay buffer R, which can be seen as a mixture of all policy distributions that appeared during training, instead of the latest trained policy π. In order to recover the original on-policy expectation, one needs to use importance sampling: DISPLAYFORM1 However, it can be challenging to properly estimate these densities and the discriminator updates might have large variance. We found that the algorithm works well in practice with the importance weight omitted. We use the GAIL discriminator in order to define rewards for training a policy using TD3; we update per-step rewards every time when we pull transitions from the replay buffer using the latest discriminator. The TD3 algorithm provides a good balance between sample complexity and simplicity of implementation and so is a good candidate for practical applications. Additionally, depending on the distribution of expert demonstrations and properties of the task, off-policy RL algorithms can effectively handle multi-modal action distributions; for example, this can be achieved for the Soft Actor Critic algorithm BID15 We implemented the DAC algorithm described in Section 4.3 using TensorFlow Eager BID0 and we evaluated it on popular benchmarks for continuous control simulated in MuJoCo BID41. We also define a new set of robotic continuous control tasks (described in detail below) simulated in PyBullet BID6, and a Virtual Reality (VR) system for capturing human examples in this environment; human examples constitute a particularly challenging demonstration source due to their noisy, multi-modal and potentially sub-optimal nature, and we define multi-task environments as a challenging setup for adversarial imitation learning. For the critic and policy networks we used the same architecture as in BID11: a 2 layer MLP with ReLU activations and 400 and 300 hidden units correspondingly. 
We also add gradient clipping BID30 to the actor network with clipping value of 40. For the Figure 3: Comparisons of algorithms using 4 expert demonstrations. y-axis corresponds to normalized reward (0 corresponds to a random policy, while 1 corresponds to an expert policy).discriminator we used the same architecture as in BID19: a 2 layer MLP with 100 hidden units and tanh activations. We trained all networks with the Adam optimizer and decay learning rate by starting with initial learning rate of 10 −3 and decaying it by 0.5 every 10 5 training steps for the actor network. In order to make the algorithm more stable, especially in the off-policy regime when the discriminator can easily over-fit to training data, we use regularization in the form of gradient penalties BID13 for the discriminator. Originally, this was introduced as an alternative to weight clipping for Wasserstein GANs ), but later it was shown that it helps to make JS-based GANs more stable as well BID25.We replicate the experimental setup of BID19: expert trajectories are sub-sampled by retaining every 20 time steps starting with a random offset (and fixed stride). It is worth mentioning that, as in BID19, this procedure is done in order to make the imitation learning task harder. With full trajectories, behavioral cloning provides competitive to GAIL.Following BID17 and BID11, we perform evaluation using 10 different random seeds. For each seed, we compute average episode reward using 10 episodes and running the policy without random noise. As in BID19 we plot reward normalized in such a way that zero corresponds to a random reward while one corresponds to expert rewards. We compute mean over all seeds and visualize half standard deviations. In order to produce the same evaluation for GAIL we used the original implementation 3 of the algorithm. Evaluation of the DAC algorithm on a suite of MuJoCo tasks are shown in Figure 3, as are the GAIL (TRPO) and BC basline . In the top-left plot, we show DAC is an order of magnitude more sample efficent than then TRPO and PPO based GAIL baselines. In the other plots, we show that by using a significantly smaller number of environment steps (orders of magnitude fewer), our DAC algorithm reaches comparable expected reward as the GAIL baseline. Furthermore, DAC outperforms the GAIL baseline on all environments within a 1 million step threshold. We obtained slightly worse for Walker2d. However, as mentioned earlier, GAIL uses a reward function that already has some biases encoded in it that aids training on this specific environment. A comprehensive suit of can be found in Appendix B, Figure 7. As discussed in Section 4.1, the reward function variants used with GAIL can have implicit biases when used without handling absorbing states. FIG1 demonstrates how bias affects on an. Surprisingly, when using a fixed and untrained GAIL discriminator that outputs 0.5 for every state-action pair, we were able to reach episode rewards of around 1000 on the Hopper environment, corresponding to approximately one third of the expert performance. Without any reward learning, and using no expert demonstrations, the agent can learn a policy that outperforms behavioral cloning FIG1. Therefore, the choice of a specific reward function might already provide strong prior knowledge that helps the RL algorithm to move towards recovering the expert policy, irrespective of the quality of the learned reward. Additionally, we evaluated our method on two environments with per-step penalty (see FIG2). 
These environment are simulated in PyBullet and consist of a Kuka IIWA arm and 3 blocks on a virtual table. A rendering of the environment can be found in Appendix C, Figure 8. Using a Cartesian displacement action for the gripper end-effector and a compact observation-space (consisting of each block's 6DOF pose and the Kuka's end-effector pose), the agent must either a) reach one of the 3 blocks in the shortest number of frames possible (the target block is provided to the policy as a one-hot vector), which we call Kuka-Reach, or b) push one block along the table so that it is adjacent to another block, which we call Kuka-PushNext. For evaluation, we define a sparse reward indicating successful task completion (within some threshold). For these imitation learning experiments, we use human demonstrations collected with a VR setup, where the participant wears a VR headset and controls in real-time the gripper end-effector using a 6DOF controller. Using the reward defined as r(s, a) = −log(1 − D(s, a)) and without absorbing state handling, the agent completely fails to recover the expert policy given 600 expert trajectories without subsampling (as shown in FIG1). In contrast, our DAC algorithm quickly learns to imitate the expert, despite using noisy and potentially sub-optimal human demonstrations. As discussed, alternative reward functions do not have this positive bias but still require proper handling of the absorbing states as well in order to avoid early termination due to incorrectly assigned per-frame penalty. Figure 6 illustrates for AIRL with and without learning rewards for absorbing states. For these experiments we use the discriminator structure from BID10 in combination with the TD3 algorithm. In this work we address several important issues associated with the popular GAIL framework. In particular, we address 1) sample inefficiency with respect to policy transitions in the environment and 2) we demonstrate a number of reward biases that can either implicitly impose prior knowledge about the true reward, or alternatively, prevent the policy from imitating the optimal expert. To Figure 6: Effect of learning absorbing state rewards when using an AIRL discriminator within the DAC Framework in OpenAI Gym environments.address reward bias, we propose a simple mechanism whereby the rewards for absorbing states are also learned, which negates the need to hand-craft a discriminator reward function for the properties of the task at hand. In order to improve sample efficiency, we perform off-policy training of the discriminator and use an off-policy RL algorithm. We show that our algorithm reaches state-of-theart performance for an imitation learning algorithm on several standard RL benchmarks, and is able to recover the expert policy given a significantly smaller number of samples than in recent GAIL work. 
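Before the appendix listing below (Algorithm 1), the central rollout modification of redirecting true terminal transitions to a learned-reward absorbing state and appending a self-loop can be sketched in Python. The extra indicator dimension follows the description in Section 4.2; using a zero action on the self-loop is an assumption, since the listing only writes a placeholder there, and all array sizes are illustrative.

    import numpy as np

    def absorbing_state(obs_dim):
        # The absorbing state s_a: all-zero features with the indicator dimension set to 1.
        s = np.zeros(obs_dim + 1)
        s[-1] = 1.0
        return s

    def pad(obs):
        # Regular states receive a 0 in the extra indicator dimension.
        return np.append(obs, 0.0)

    def wrap_for_absorbing_states(trajectory, obs_dim, true_termination):
        # trajectory: list of (s, a, s_next) tuples. For genuine environment
        # terminations (not time-limit cut-offs), redirect the final transition
        # to s_a and append the (s_a, ., s_a) self-loop.
        wrapped = [(pad(s), a, pad(s_next)) for (s, a, s_next) in trajectory]
        if true_termination:
            s_a = absorbing_state(obs_dim)
            s_T, a_T, _ = wrapped[-1]
            wrapped[-1] = (s_T, a_T, s_a)
            wrapped.append((s_a, np.zeros_like(a_T), s_a))  # zero action: an assumption
        return wrapped

    traj = [(np.ones(3), np.array([0.1]), 2 * np.ones(3)),
            (2 * np.ones(3), np.array([0.2]), 3 * np.ones(3))]
    print(wrap_for_absorbing_states(traj, obs_dim=3, true_termination=True)[-1])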
Algorithm 1: Discriminator-Actor-Critic Adversarial Imitation Learning

    Input: expert replay buffer R_E
    procedure WrapForAbsorbingStates(τ)
        if s_T is a terminal state not caused by time limits then
            τ ← τ \ {(s_T, a_T, ·, s'_T)} ∪ {(s_T, a_T, ·, s_a)}
            τ ← τ ∪ {(s_a, ·, ·, s_a)}
        end if
        return τ
    end procedure
    Initialize replay buffer R ← ∅
    for τ = {(s_t, a_t, ·, s'_t)}_{t=1..T} ∈ R_E do
        τ ← WrapForAbsorbingStates(τ)        ▷ Wrap expert rollouts with absorbing states
    end for
    for n = 1, ... do
        Sample τ = {(s_t, a_t, ·, s'_t)} ...

In the Kuka-Reach tasks, the agent must bring the robot gripper to 1 of the 3 blocks (where the state contains a 1-hot encoding of the task), and for the Kuka-PushNext tasks, the agent must use the robot gripper to push one block next to another.
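Evaluation on these Kuka tasks uses the sparse success reward described earlier. A minimal sketch of such a check is given below; the distance threshold, the block-position layout, and the exact success criterion are assumptions, since the text only states that success is defined within some threshold.

    import numpy as np

    def kuka_reach_success(end_effector_pos, block_positions, target_onehot, threshold=0.05):
        # Sparse evaluation reward for Kuka-Reach: 1 if the gripper is within
        # `threshold` of the block selected by the one-hot task vector, else 0.
        target = block_positions[int(np.argmax(target_onehot))]
        return 1.0 if np.linalg.norm(end_effector_pos - target) < threshold else 0.0

    blocks = np.array([[0.5, 0.0, 0.1], [0.6, 0.2, 0.1], [0.4, -0.2, 0.1]])
    print(kuka_reach_success(np.array([0.51, 0.01, 0.1]), blocks, np.array([1, 0, 0])))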
Capsule Networks have shown encouraging results on de facto benchmark computer vision datasets such as MNIST, CIFAR and smallNORB. However, they have yet to be tested on tasks where the entities detected inherently have more complex internal representations, where there are very few instances per class to learn from, and where point-wise classification is not suitable. Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points. In doing so we introduce Siamese Capsule Networks, a new variant that can be used for pairwise learning tasks. The model is trained using a contrastive loss with ℓ2-normalized capsule encoded pose features. We find that Siamese Capsule Networks perform well against strong baselines on both pairwise learning datasets, yielding the best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.

Convolutional Neural Networks (CNNs) have been a mainstay model for a wide variety of tasks in computer vision. CNNs are effective at detecting local features in the receptive field, although the spatial relationship between features is lost when crude routing operations are performed to achieve translation invariance, as is the case with max and average pooling. Essentially, pooling results in viewpoint invariance so that small perturbations in the input do not affect the output. This leads to a significant loss of information about the internal properties of present entities (e.g. location, orientation, shape and pose) in an image and the relationships between them. The issue is usually combated by having large amounts of annotated data from a wide variety of viewpoints, albeit redundant and less efficient in many cases. As noted by Hinton (1985), from a psychology perspective of human shape perception, pooling does not account for the coordinate frames imposed on objects when performing mental rotation to identify handedness BID20; BID16 BID10. Hence, the scalar output activities from local kernel regions that summarize sets of local inputs are not sufficient for preserving the reference frames used in human perception, since viewpoint information is discarded. Spatial Transformer Networks (STN) BID11 have acknowledged the issue by using dynamic spatial transformations on feature mappings to enhance the geometric invariance of the model, although this approach addresses changes in viewpoint by learning to remove rotational and scale variance, as opposed to viewpoint variance being reflected in the model activations. Instead of addressing translation invariance using pooling operations, BID6 have worked on achieving translation equivariance. The recently proposed Capsule Networks BID21; BID5 have shown encouraging results in addressing these challenges. Thus far, Capsule Networks have only been tested on datasets that have a relatively sufficient number of instances per class to learn from, and have only been used in the standard classification setup. This paper extends Capsule Networks to the pairwise learning setting to learn relationships between whole entity encodings, while also demonstrating their ability to learn from little data and to perform few-shot learning where instances from new classes arise during testing (i.e. zero-shot prediction). The Siamese Capsule Network is trained using a contrastive loss with ℓ2-normalized encoded features and demonstrated on two face verification tasks.
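Since training uses a contrastive loss on ℓ2-normalized capsule encodings, a small sketch of the standard margin-based contrastive loss (in the style of Hadsell et al.) is given below. The margin value, the convention that a label of 1 denotes a genuine same-identity pair, and the exact variant of the loss are assumptions here.

    import numpy as np

    def l2_normalize(v, eps=1e-8):
        return v / (np.linalg.norm(v) + eps)

    def contrastive_loss(enc_a, enc_b, same_identity, margin=1.0):
        # same_identity = 1 pulls the pair together; 0 pushes it apart up to the margin.
        d = np.linalg.norm(l2_normalize(enc_a) - l2_normalize(enc_b))
        return same_identity * d ** 2 + (1 - same_identity) * max(0.0, margin - d) ** 2

    rng = np.random.default_rng(0)
    a, b = rng.normal(size=16), rng.normal(size=16)   # stand-ins for capsule pose encodings
    print(contrastive_loss(a, b, same_identity=1), contrastive_loss(a, b, same_identity=0))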
BID6 first introduced the idea of using whole vectors to represent internal properties (referred to as instantiation parameters that include pose) of an entity with an associated activation probability where each capsule represents a single instance of an entity within in an image. This differs from the single scalar outputs in conventional neural networks where pooling is used as a crude routing operation over filters. Pooling performs sub-sampling so that neurons are invariant to viewpoint change, instead capsules look to preserve the information to achieve equivariance, akin to perceptual systems. Hence, pooling is replaced with a dynamic routing scheme to send lowerlevel capsule (e.g nose, mouth, ears etc.) outputs as input to parent capsule (e.g face) that represent part-whole relationships to achieve translation equivariance and untangles the coordinate frame of an entity through linear transformations. The idea has its roots in computer graphics where images are rendered given an internal hierarchical representation, for this reason the brain is hypothesized to solve an inverse graphics problem where given an image the cortex deconstructs it to its latent hierarchical properties. The original paper by BID21 describes a dynamic routing scheme that represent these internal representations as vectors given a group of designated neurons called capsules, which consist of a pose vector u ∈ R d and activation α ∈. The architecture consists of two convolutional layers that are used as the initial input representations for the first capsule layer that are then routed to a final class capsule layer. The initial convolutional layers allow learned knowledge from local feature representations to be reused and replicated in other parts of the receptive field. The capsule inputs are determined using a Iterative Dynamic Routing scheme. A transformation W ij is made to output vector u i of capsule C L i. The length of the vector u i represents the probability that this lower-level capsule detected a given object and the direction corresponds to the state of the object (e.g orientation, position or relationship to upper capsule). The output vector u i is transformed into a prediction vectorû j|i, whereû j|i = W ij u i. Then,û j|i is weighted by a coupling coefficient c ij to obtain s j = i c ijûj|i, where coupling coefficients for each capsule j c ij = 1 and c ij is got by log prior probabilities b ij from a sigmoid function, followed by the softmax, c ij = e bij / k e b ik. Ifû L j|i has high scalar magnitude when multiplied by u L+1 j then the coupling coefficient c ij is increased and the remaining potential parent capsules coupling coefficients are decreased. Routing By Agreement is then performed using coincidence filtering to find tight clusters of nearby predictions. The entities output vector length is represented as the probability of an entity being present by using the nonlinear normalization shown in Equation 1 where vote v j is the output from total input s j, which is then used to compute the agreement a ij = v jûj|i that is added to the log prior b ij. The capsule is assigned a high log-likelihood if densely connected clusters of predictions are found from a subset of s. The centroid of the dense cluster is output as the entities generalized pose. This coincidence filtering step can also be achieved by traditional outlier detection methods such as Random sample consensus (RANSAC) BID3 BID3 classical for finding subsets of the feature space with high agreement. 
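The squashing non-linearity referred to as Equation 1, together with one pass of routing-by-agreement, can be written compactly. The sketch below follows the formulation of BID21; the array shapes (6 lower capsules, 3 upper capsules, 8-dimensional poses) are chosen arbitrarily for illustration.

    import numpy as np

    def squash(s):
        # Equation 1: short vectors shrink toward zero length, long vectors toward
        # unit length, while the orientation of s is preserved.
        norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
        return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + 1e-9)

    def dynamic_routing(u_hat, num_iters=3):
        # u_hat: prediction vectors of shape (num_lower, num_upper, dim).
        num_lower, num_upper, _ = u_hat.shape
        b = np.zeros((num_lower, num_upper))                       # log priors b_ij
        for _ in range(num_iters):
            c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients c_ij
            s = (c[..., None] * u_hat).sum(axis=0)                 # s_j = sum_i c_ij * u_hat_j|i
            v = squash(s)                                          # vote v_j
            b = b + (u_hat * v[None]).sum(axis=-1)                 # agreement a_ij = v_j . u_hat_j|i
        return v

    rng = np.random.default_rng(0)
    v = dynamic_routing(rng.normal(size=(6, 3, 8)))
    print(np.linalg.norm(v, axis=-1))   # capsule lengths in (0, 1): presence probabilities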
Although, the motivation for using the vector normalization of the instantiation parameters is to force the network to preserve orientation. Lastly, a reconstruction loss on the images was used for regularization which constrains th capsules to learn properties that can better encode the entities. In this paper, we do not use such regularization scheme by autoencoding pairs of input images, instead we use a variant of dropout. BID5 recently describe matrix capsules that perform routing by agreement using the expectation maximization (EM) algorithm, motivated by computer graphics where pose matrices are used to define rotations and translations of objects to account for viewpoint changes. Each parent capsule is considered a Gaussian and the pose matrix of each child capsule are considered data samples of the Gaussian. A given layer L contains a set of capsules C L such that ∀C ∃ {M, α} ∈ C L where pose matrix M ∈ R n×n (n = 4) and activation α ∈ are the outputs. A vote is made V ij = M i W ij for the pose matrix of C L+1 j where W ij ∈ R n×n is a learned viewpoint invariant transformation matrix from capsule where the cost h j is the negative log-probability density weighted by the assignment probabilities r ij, −β u is the negative log probability density per pose matrix computed to describe C L+1 j. If C L+1 j is activated −β a is the cost for describing (µ j, σ 2 j) from lower-level pose data samples along with r ij and λ is the inverse temperature so as the assignment probability becomes higher the slope of the sigmoid curve becomes steeper (represents the presence of an entity instead of the nonlinear vector normalization seen in Equation 1). The network uses 1 standard convolutional layer, a primary capsule layer, 2 intermediate capsule convolutional layer, followed by the final class capsule layer. The matrix capsule network significantly outperformed CNNs on the SmallNORB dataset. LaLonde & Bagci FORMULA7 introduce SegCaps which uses a locally connected dynamic routing scheme to reduce the number of parameters while using deconvolutional capsules to compensate for the loss of global information, showing best performance for segmenting pathological lungs from low dose CT scans. The model obtained a 39% and 95% reduction in parameters over baseline architectures while outperforming both. Bahadori FORMULA7 introduced Spectral Capsule Networks demonstrated on medical diagnosis. The method shows faster convergence over the EM algorithm used with pose vectors. Spatial coincidence filters align extracted features on a 1-d linear subspace. The architecture consists of a 1d convolution followed by 3 residual layers with dilation. Residual blocks R are used as nonlinear transformations for the pose and activation of the first primary capsule instead of the linear transformation that accounts for rotations in CV, since deformations made in healthcare imaging are not fully understood. The weighted votes are obtained as s j,i = α i R j (u i) ∀i where S j is a matrix of concatenated votes that are then decomposed using SVD, where the first singular value dimensions 1 is used to capture most of the variance between votes, thus the activation a j activation is computed as σ η(s DISPLAYFORM0 k is the ratio of all variance explained for all right singular vectors in V, b is optimized and η is decreased during training. The model is trained by maximizing the log-likelihood showing better performance than the spread loss used with matrix capsules and mitigates the problem of capsules becoming dormant. 
BID27 formalize the capsule routing strategy as an optimization of a clustering loss and a KL regularization term between the coupling coefficient distribution and its past states. The proposed objective function follows as min C,S {L(C, S):= − i j c ij o j|i, s j + α i j c ij log c ij } where o j|i = T ij µ i /||T ij || F and ||T ij || F is the Frobenious norm of T ij. This routing scheme shows significant benefit over the original routing scheme by BID21 as the number of routing iterations increase. Evidently, there has been a surge of interest within the research community. In contrast, the novelty presented in this paper is the pairwise learning capsule network scheme that proposes a different loss function, a change in architecture that compares images, aligns entities across images and describes a method for measuring similarity between final layer capsules such that inter-class variations are maximized and intra-class variations are minimized. Before describing these points in detail, we briefly describe the current state of the art work (SoTA) in face verification that have utilized Siamese Networks. Siamese Networks (SNs) are neural networks that learn relationships between encoded representations of instance pairs that lie on low dimensional manifold, where a chosen distance function d ω is used to find the similarity in output space. Below we briefly describe state of the art convolutional SN's that have been used for face verification and face recognition. BID24 presented a joint identification-verification approach for learning face verification with a contrastive loss and face recognition using cross-entropy loss. To balance loss signals for both identification and verification, they investigate the effects of varying weights controlled by λ on the intra-personal and inter-personal variations, where λ = 0 leaves only the face recognition loss and λ → ∞ leaves the face verification loss. Optimal are found when λ = 0.05 intra personal variation is maximized while both class are distinguished. BID28 propose a center loss function to improve discriminative feature learning in face recognition. The center loss function proposed aims to improve the discriminability between feature representations by minimizing the intra-class variation while keeping features from different classes separable. The center loss is given as DISPLAYFORM0 where z = W T j x i + b j. The c yi is the centroid of feature representations pertaining to the i th class. This penalizes the distance between class centers and minimizes the intra-class variation while the softmax keeps the inter-class features separable. The centroids are computed during stochastic gradient descent as full batch updates would not be feasible for large networks. BID15 proposed Sphereface, a hypersphere embedding that uses an angular softmax loss that constrains disrimination on a hypersphere manifold, motivated by the prior that faces lie on a manifold. The model achieves 99.22 % on the LFW dataset, and competitive on Youtube Face (YTF) and MegaFace. 
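The center-loss equation itself is elided above, so the following sketch uses the standard formulation it describes (distance of each embedding to its class centroid c_{y_i}); the 1/2 factor, the mean reduction, and updating the centroids alongside SGD are assumptions for illustration.

```python
import torch

def center_loss(features: torch.Tensor, labels: torch.Tensor,
                centers: torch.Tensor) -> torch.Tensor:
    """Penalize the distance between each feature x_i and its class centroid c_{y_i},
    shrinking intra-class variation while a softmax term keeps classes separable.

    features: (B, D) embeddings, labels: (B,) class indices, centers: (num_classes, D).
    Centers are updated incrementally during SGD rather than recomputed over the full dataset.
    """
    diff = features - centers[labels]
    return 0.5 * diff.pow(2).sum(dim=1).mean()
```

In the joint objective this term is added to the softmax cross-entropy with a small weight so that inter-class separation is preserved while intra-class variation shrinks.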
BID22 proposed a triplet similarity embedding for face verification using a triple loss arg min W = α,p,n∈T max(0, α + α T W T W (n − p)) where for T triplet sets lies an anchor class α, positive class p and negative class n, a projection matrix W, (performed PCA to obtain W 0) is minimized with the constraint that W BID7 use deep metric learning for face verification with loss arg min DISPLAYFORM1 DISPLAYFORM2 F | where g(z) = log(1 + e βz)/β, β controls the slope steepness of the logistic function, ||A|| F is the frobenius norm of A and λ is a regularization parameter. Hence, the loss function is made up of a logistic loss and regularization on parameters θ = [W, b]. Best are obtained using a combination of SIFT descriptors, dense SIFT and local binary patterns (LBP), obtaining 90.68% (+/-1.41) accuracy on the LFW dataset. BID18 used an 2 -constraint on the softmax loss for face verification so that the encoded face features lie on the ambit of a hypersphere, showing good improvements in performance. This work too uses an 2 -constraint on capsule encoded face embeddings. FaceNet BID23 too uses a triplet network that combines the Inception network BID25 and a 8-layer convolutional model BID29 which learns to align face patches during training to perform face verification, recognition and clustering. The method trains the network on triplets of increasing difficulty using a negative example mining technique. Similarly, we consider a Siamese Inception Network for the tasks as one of a few comparisons to SCNs. The most relevant and notable use of Siamese Networks for face verification is the DeepFace network, introduced by BID26. The performance obtained was on par with human level performance on the Faces in the Wild (LFW) dataset and significantly outperformed previous methods. However, it is worth noting this model is trained on a large dataset from Facebook (SFC), therefore the model can be considered to be performing transfer learning before evaluation. The model also carries out some manual steps for detecting, aligning and cropping faces from the images. For detecting and aligning the face a 3D model is used. The images are normalized to avoid any differences in illumination values, before creating a 3D model which is created by first identifying 6 fiducial points in the image using a Support Vector Regressor from a LBP histogram image descriptor. Once the faces are cropped based on these points, a further 67 fiducial point are identified for 3D mesh model, followed by a piecewise affine transformation for each section of the image. The cropped image is then passed to 3 CNN layers with an initial max-pooling layer followed two fully-connected layers. Similar to Capsule Networks, the authors refrain from using max pooling at each layer due to information loss. In contrast to this work, the only preprocessing steps for the proposed SCNs consist of pixel normalization and a reszing of the image. The above work all achieve comparable state of the art for face verification using either a single CNN or a combination of various CNNs, some of which are pretrained on large related datasets. In contrast, this work looks to use a smaller Capsule Network that is more efficient, requires little preprocessing steps (i.e only a resizing of the image and normalization of input features, no aligning, cropping etc.) and can learn from relatively less data. 
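Since this work adopts the same ℓ2-constraint on its capsule face embeddings, a minimal sketch of that projection is given below; the radius alpha is an assumed hyperparameter (its value is not stated here).

```python
import torch
import torch.nn.functional as F

def l2_constrain(embeddings: torch.Tensor, alpha: float = 16.0) -> torch.Tensor:
    """Rescale embeddings so they lie on a hypersphere of radius alpha before the loss,
    so that only the direction of the encoded face matters, not its norm."""
    return alpha * F.normalize(embeddings, p=2, dim=1)
```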
The Capsule Network for face verification is intended to identify enocded part-whole relationships of facial features and their pose that in turn leads to an improved similarity measure by aligning capsule features across paired images. The architecture consists of a 5-hidden layer (includes 2 capsule layers) network with tied weights (since both inputs are from the same domain). The 1 st layer is a convolutional filter with a stride of 3 and 256 channels with kernels κ 1 i ∈ R 9×9 ∀i over the image pairs x 1, x 2 ∈ R 100×100, ing in 20, 992 parameters. The 2 nd layer is the primary capsule layer that takes κ and outputs κ ∈ R 31×31 matrix for 32 capsules, leading to 5.309 × 10 6 parameters (663, 552 weights and 32 biases for each of 8 capsules). The 3 rd layer is the face capsule layer, representing the routing of various properties of facial features, consisting of 5.90 × 10 6 parameters. This layer is then passed to a single fully connected layer by concatenating DISPLAYFORM0 as input, while the sigmoid functions control the dropout rate for each capsule during training. The nonlinear vector normalization shown in Equation 1 is replaced with a tanh function tanh which we found in initial testing to produce better . Euclidean distance, Manhattan distance and cosine similarity are considered as measures between the capsule image encodings. The aforementioned SCN architecture describes the setup for the AT&T dataset. For the LFW dataset, 6 routing iterations are used and 4 for AT&T.Capsule Encoded Representations To encode paired images x 1, x 2 into vector pairs h 1, h 2 the pose vector of each capsule is vectorized and passed as input to a fully connected layer containing 20 activation units. Hence, for each input there is a lower 20-dimensional representation of 32 capsule pose vectors ing in 512 input features. To ensure all capsules stay active the dropout probability rate is learned for each capsule. The sigmoid function learns the dropout rate of the final capsule layer using Concrete Dropout BID4, which builds on prior work Kingma et al. FORMULA7; BID17 by using a continuous relaxation that approximates the discrete Bernoulli distribution used for dropout, referred to as a concrete distribution. Equation 2 shows the objective function for updating the concrete distribution. For a given capsule probability p c in the last capsule layer, the sigmoid computes the relaxationz on the Bernoulli variable z, where u is drawn uniformly between where t denotes the temperature values (t = 0.1 in our experiments) which forces probabilities at the extremum when small. The pathwise derivative estimator is used to find a continuous estimation of the dropout mask. The weight λ is used to prevent the activity vector lengths from deteriorating early in training if a class capsule is absent. The overall loss is then simply the sum of the capsule losses c L c. A spread loss BID5 has also been used to maximize the inter-class distance between the target class and the remaining classes for classifying on the smallNORB dataset. This is given as DISPLAYFORM1 DISPLAYFORM2 where the margin m is increased linearly during training to ensure lower-level capsule stay active throughout training. This work instead uses a contrastive margin loss BID2 where the aforementioned capsule encoding similarity function d ω outputs a predicted similarity score. The contrastive loss L c ensures similar vectorized pose encodings are drawn together and dissimilar poses repulse. 
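The Concrete Dropout relaxation is only partially reproduced above (Equation 2 is elided), so the sketch below follows the standard concrete-distribution form it references; treating the output as a keep-mask on the final capsule layer is an assumption for illustration.

```python
import torch

def concrete_dropout_mask(p: torch.Tensor, t: float = 0.1, eps: float = 1e-7) -> torch.Tensor:
    """Continuous relaxation z~ of a Bernoulli dropout variable with learned probability p.

    p: per-capsule drop probabilities in (0, 1) for the last capsule layer
    t: temperature (0.1 in the text above); small t pushes z~ towards {0, 1}
    Returns a differentiable mask 1 - z~ with the same shape as p.
    """
    u = torch.rand_like(p)  # u ~ Uniform(0, 1)
    z_tilde = torch.sigmoid(
        (torch.log(p + eps) - torch.log(1.0 - p + eps)
         + torch.log(u + eps) - torch.log(1.0 - u + eps)) / t
    )
    return 1.0 - z_tilde
```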
Equation 3 shows a a pair of images that are passed to the SCN model where DISPLAYFORM3 2 computes the Euclidean distance between encodings and m is the margin. When using Manhattan distance DISPLAYFORM4 A double margin loss that has been used in prior work by BID14 is also considered to affect matching pairs such that to account for positive pairs that can also have high variance in the distance measure. It is worth noting this double margin is similar to the aforementioned margin loss used on class capsules, without the use of λ. Equation 4 shows the double-margin contrastive loss where positive margin m p and negative margin m n are used to find better separation between matching and non-matching pairs. This loss is only used for LFW, given the limited number of instances in AT&T we find the amount of overlap between pairs to be less severe in experimentation. DISPLAYFORM5 The original reconstruction loss DISPLAYFORM6 i ) 2 used as regularization is not used in the pairwise learning setting, instead we rely on the dropout for regularization with exception of the SCN model that uses concrete dropout on the final layer. Optimization Convergence can often be relatively slow for face verification tasks, where few informative batch updates (e.g a sample with significantly different pose for a given class) get large updates but soon after the effect is diminished through gradient exponential averaging (originally introduced to prevent α → 0). Motivated by recent findings that improve adaptive learning rates we use AMSGrad BID19. AMSGrad improves over ADAM in some cases by replacing the exponential average of squared gradients with a maximum that mitigates the issue by keeping long-term memory of past gradients. Thus, AMSGrad does not increase or decrease the learning rate based on gradient changes, avoiding divergent or vanishing step sizes over time. Equation 5 presents the update rule, where diagonal of gradient g t is given as DISPLAYFORM7 A. AT&T dataset The AT&T face recognition and verification dataset consists of 40 different subjects with only 10 gray-pixel images per subject in a controlled setting. This smaller dataset allows us to test how SCNs perform with little data. For testing, we hold out 5 subjects so that we are testing on unseen subjects, as opposed to training on a given viewpoint of a subject and testing on another viewpoint of the same subject. Hence, zero-shot pairwise prediction is performed during testing. The LFW consists of 13,000 colored photographed faces from the web. This dataset is significantly more complex not only because there 1680 subjects, with some subjects only consisting of two images, but also because of varied amount of aging, pose, gender, lighting and other such natural characteristics. Each image is 250 × 250, in this work the image is resized to 100×100 and normalized. From the original LFW dataset there has been 2 different versions of the dataset that align the images using funneling BID8 and deep funneling BID9 Baselines SCNs are compared against well-established architectures for image recognition and verification tasks, namely AlexNet, ResNet-34 and InceptionV3 with 6 inception layers instead of the original network that uses 8 layers which are used many of the aforementioned papers in Section 3. Table 1 shows best test obtained when using contrastive loss with Euclidean distance between encodings (i.e Mahalanobis distance) for both AT &T and LFW over 100 epochs. 
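Because Equations 3 and 4 are elided above, the sketch below writes out the standard single-margin and double-margin contrastive losses they describe; which margin applies to matching versus non-matching pairs follows the usual convention and should be read as an assumption, with y = 1 denoting a matching pair.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h1, h2, y, m: float = 0.2):
    """Single-margin contrastive loss on encoded capsule pose vectors (cf. Equation 3):
    matching pairs are pulled together, non-matching pairs pushed beyond margin m."""
    d = F.pairwise_distance(h1, h2)
    return (y * d.pow(2) + (1 - y) * torch.clamp(m - d, min=0).pow(2)).mean()

def double_margin_contrastive_loss(h1, h2, y, m_p: float = 0.5, m_n: float = 0.2):
    """Double-margin variant (cf. Equation 4): matching pairs are only penalized beyond m_p
    and non-matching pairs only inside m_n, tolerating the high distance variance of
    genuine matches noted above."""
    d = F.pairwise_distance(h1, h2)
    return (y * torch.clamp(d - m_p, min=0).pow(2)
            + (1 - y) * torch.clamp(m_n - d, min=0).pow(2)).mean()
```

The AMSGrad update mentioned above is exposed in common frameworks as a variant of Adam (for example, torch.optim.Adam(params, amsgrad=True)), so no custom optimizer code is needed.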
The former uses m = 2.0 and the latter uses m = 0.2, while for the double-margin contrastive loss a matching margin m_n = 0.2 and a negative matching margin m_p = 0.5 are selected. These settings were chosen during 5-fold cross validation, grid searching over possible margin settings. SCN outperforms baselines on the AT&T dataset after training for 100 epochs. We find that, because AT&T contains far fewer instances, an adapted dropout rate leads to a slight increase in contrastive loss. Additionally, adding a reconstruction loss with λ_r = 1e-4 for both paired images led to a decrease in performance when compared to using dropout with a rate p = 0.2 on all layers except the final layer that encodes the pose vectors. We find for the LFW dataset that the SCN and AlexNet have obtained the best results, while the SCN has 25% fewer parameters. Additionally, the use of a double margin results in better performance for the standard SCN but a slight drop in performance when used with concrete dropout on the final layer (i.e. SDropCapNet). Figure 2 illustrates the contrastive loss during training with ℓ2-normalized features for each model tested with various distance measures on AT&T and LFW. We find that SCN yields faster convergence on AT&T, particularly when using Manhattan distance. However, for Euclidean distance, we observe a loss variance reduction during training and the best overall performance. Through experiments we find that batch-normalized convolutional layers improve the performance of the SCN. Batch normalization provides a unit Gaussian batch that is shifted by γ^(k) and scaled with DISPLAYFORM0. This allows the network to learn whether the input range should be more or less diffuse. Batch normalization on the initial convolutional layers reduced variance in loss during training on both the AT&T and LFW datasets. LFW test results show that the SCN model takes longer to converge, particularly in the early stages of training, in comparison to AlexNet. Figure 3 shows the probability density of the positive pair predictions for each model for all distances between encodings with contrastive loss for the LFW dataset. We find the variance of predictions is lower in comparison to the remaining models, showing a higher precision in the predictions, particularly for Manhattan distance. Additionally, the varying distances for these matching images were close. Finally, the SCN model has between 104-116% fewer parameters than AlexNet, 24-27% fewer than ResNet-34 and 127-135% fewer than the best standard baseline for both datasets. However, even considering tied weights between models in the SCN, Capsule Networks are primarily limited in speed, even with a reduction in parameters, due to the routing iterations that are necessary during training. This paper has introduced the Siamese Capsule Network, a novel architecture that extends Capsule Networks to the pairwise learning setting with a feature ℓ2-normalized contrastive loss that maximizes inter-class variance and minimizes intra-class variance. The results indicate Capsule Networks perform better at learning from only a few examples and converge faster when a contrastive loss is used that takes face embeddings in the form of encoded capsule pose vectors. We find Siamese Capsule Networks to perform particularly well on the AT&T dataset in the few-shot learning setting, which is tested on unseen classes (i.e. subjects) during testing, while remaining competitive against baselines for the larger Labeled Faces in the Wild dataset. | A variant of capsule networks that can be used for pairwise learning tasks.
Results show that Siamese Capsule Networks work well in the few-shot learning setting. | 1,236 | scitldr
Animals excel at adapting their intentions, attention, and actions to the environment, making them remarkably efficient at interacting with a rich, unpredictable and ever-changing external world, a property that intelligent machines currently lack. Such adaptation property strongly relies on cellular neuromodulation, the biological mechanism that dynamically controls neuron intrinsic properties and response to external stimuli in a context dependent manner. In this paper, we take inspiration from cellular neuromodulation to construct a new deep neural network architecture that is specifically designed to learn adaptive behaviours. The network adaptation capabilities are tested on navigation benchmarks in a meta-learning context and compared with state-of-the-art approaches. Results show that neuromodulation is capable of adapting an agent to different tasks and that neuromodulation-based approaches provide a promising way of improving adaptation of artificial systems. We are now seeing the emergence of highly efficient algorithms that are capable of learning and solving complex problems. However, it remains difficult to learn models that generalise or adapt themselves efficiently to new, unforeseen problems based on past experiences. This calls for the development of novel architectures specifically designed to enhance adaptation capabilities of current deep neural networks (DNN). In biological nervous systems, cellular neuromodulation provides the ability to continuously tune neurons input/output behavior to shape their response to external inputs in different contexts, generally in response to an external signal carried by biochemicals called neuromodulators. Neuromodulation regulates many critical nervous system properties that cannot be achieved solely through synaptic plasticity, which represents the ability for neurons to tune their connectivity during learning. Neuromodulation has been shown to be critical to the adaptive control of continuous behaviours, such as in motor control among others. We propose a new neural architecture specifically designed for DNNs and inspired from cellular neuromodulation which we call NMN, standing for "Neuro-Modulated Network". At its core, the NMN architecture is made of two neural networks: a main network and a neuromodulatory network. The main network is a feed-forward DNN composed of neurons equipped with a parametric activation function specifically designed for neuromodulation. It allows the main network to be adapted to new unforeseen problems. The neuromodulatory network, on the other hand, controls the neuronal dynamics of the main network via the parameters of its activation functions. Both networks have different inputs: whereas the main network is in charge of processing samples, the neuromodulatory network processes feedback and contextual data. In, the authors take inspiration from Hebbian plasticity to build networks with plastic weights, allowing them to tune their weights dynamically. In the same authors extand their work by learning a neuromodulatory signal that dictates which and when connections should be plastic. Our architecture is also related to hypernetworks, in which a network's weights are computed through another network. Other recent works focused on learning fixed activation functions. The NMN architecture revolves around the neuromodulatory interaction between the neuromodulatory and main networks. 
We mimick biological cellular neuromodulation in a DNN by assigning the neuromodulatory network the task to tune the slope and bias of the main network activation functions. Let σ(x): R → R denote any activation function and its neuromodulatory capable version σ NMN (x, z; w s, w b) = σ z T (xw s + w b) where z ∈ R k is a neuromodulatory signal and w s, w b ∈ R k are two parameter vectors of the activation function, respectively governing a scale factor and an offset. In this work, we propose to replace all the main network's neurons activation function with their neuromodulatory capable counterparts. The neuromodulatory signal z, which size k is a free parameter, is shared for all these neurons and computed by the neuromodulatory network as z = f (c). The function f can be any DNN taking as input the vector c representing some contextual inputs (e.g. c may have a dynamic size in which case f would be parameterized as a recurrent neural network (RNN) or a conditional neural process ). The complete NMN architecture and the change made to the activation functions are depicted on Figure 1. Notably, the number of newly introduced parameters scales linearly with the number of neurons in the main network whereas it would scale linearly with the number of connections between neurons if the neuromodulatory network was affecting connection weights, as seen for instance in the context of hypernetworks. Therefore this approach can be extanded to very large networks. Setting. We evaluate the NMN architecture on meta-RL which is motivated by the analogy with biology. In contrast with classical RL, which is formalized as the interaction between an agent and an environment defined as a markov decision process (MDP), the meta-RL setting resides in the sub-division of an MDP as a distribution D over simpler MDPs. Let t denote the discrete time, x t the state of the MDP at time t, a t the action taken at time t and r t the reward obtained at the subsequent time-step. At the beginning of a new episode i, a new element is drawn from D to define an MDP, referred to by M, with which the meta-RL agent interacts for T ∈ N time-steps afterwards. The only information that the agent collects on M is through observing the states crossed and the rewards obtained at each time-step. We denote by h t = [x 0, a 0, r 0, x 1, . . ., a t−1, r t−1] the history of the interaction with M up to time step t. As in, the goal of the meta-learning agent is to maximise the expected value of the sum of rewards it can obtain over all episodes and steps. In, the authors tackle this meta-RL framework by using an advantage actor-critic (A2C) algorithm, in which the actor and the critic are RNNs, taking [h t, x t] as input. In this work, we propose to compare the NMN architecture to standard RNN by modelling both the actor and the critic with NMN. To this end, we define the feedback and contextual inputs c (i.e. the neuromodulatory network inputs) as h t while the main network's input is defined as x t. Note that h t grows as the agent interacts with M, motivating the usage of a RNN as neuromodulatory network. To be as close as possible to the neuronal model proposed by, the main network is a fully-connected neural network built using saturated rectified linear units activation functions σ(x) = min(1, max(−1, x)), except for the final layer (also neuromodulated), for which σ(x) = x. 
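A minimal sketch of one neuromodulated fully connected layer implementing σ_NMN(x, z; w_s, w_b) = σ(z^T(x·w_s + w_b)) is given below; the initialization, the use of a standard linear layer for the pre-activation, and the clamping to [−1, 1] as the saturated ReLU are assumptions consistent with the description above rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class NeuromodulatedLinear(nn.Module):
    """Fully connected layer whose per-neuron activation slope and bias are controlled
    by a shared neuromodulatory signal z of size k, as in sigma_NMN(x, z; w_s, w_b)."""

    def __init__(self, in_features: int, out_features: int, k: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.w_s = nn.Parameter(0.1 * torch.randn(out_features, k))  # per-neuron scale params
        self.w_b = nn.Parameter(torch.zeros(out_features, k))        # per-neuron offset params

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # x: (B, in_features), z: (B, k) produced by the neuromodulatory network f(c)
        pre = self.linear(x)                              # (B, out_features)
        mod = pre.unsqueeze(-1) * self.w_s + self.w_b     # x * w_s + w_b, shape (B, out, k)
        act = (mod * z.unsqueeze(1)).sum(dim=-1)          # z^T (x * w_s + w_b)
        return torch.clamp(act, -1.0, 1.0)                # saturated ReLU of the main network
```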
We build our models such that both standard RNN and NMN architectures have the same number of recurrent layers/units and a relative difference between the numbers of parameters that is smaller than 2%. Both models are trained using an A2C algorithm with generalized advantage estimation and proximal policy updates. Finally, no parameter is shared between the actor and the critic. Benchmarks. We carried out our experiments on three custom benchmarks: a simple toy problem and two navigation problems with sparse rewards. These benchmarks are built to evaluate our architecture in environments with continuous action spaces. For conciseness and clarity, we only provide a mathematical definition of the first benchmark (later needed for discussing ). The two other benchmarks are briefly textually depicted and further details are available on Github 2. We define the first benchmark (made of a 1D state space and action space) through a random variable α, and receives a reward r t which is equal to 10 if |a t − p t | < 1 and −|a t − p t | otherwise. In case of positive reward, p t+1 is re-sampled uniformly in its domain else p t+1 = p t. The second benchmark consists in navigating towards a target in a 2D space with noised movements. At each time-step, the agent observes its relative position to the target and outputs the direction of a move vector m t. A perturbation vector w t is then sampled uniformly in a cone, whose main direction is dictated by the current task in D. Finally the agent is moved following m t + w t. If the agent reaches the target, it receives a high reward and is moved to a position sampled uniformly in the 2D space. The third benchmark also consists in navigating in a 2D space but containing two targets. At each time-step the agent observes its relative position to the two targets and is moved along a direction given by its action. In this benchmark, D is only composed of two tasks, corresponding to the attribution of a positive reward to one of the two targets and a negative reward to the other. As for benchmark 2, once the agent reaches a target, it receives the corresponding reward and is moved to a position sampled uniformly in the 2D space. Results. From a learning perspective, a comparison of the sum of rewards obtained per episode by NMNs and RNNs on the three benchmarks is shown on Figure 2. The show that in average, NMNs learn faster (with respect to the number of episodes) and converge towards better policies than RNNs (i.e., higher rewards for the last episodes). Most noteable, NMNs show very stable , with small variances over different random seeds, as opposed to RNNs. From an adaptation perspective, Figure 3 shows the temporal evolution of the neuromodulatory signal z (part A) and of the rewards (part B) obtained with respect to α for 1000 episodes played on benchmark 1. For small values of t the agent has little information on the current task, leading to a non-optimal behavior (as it can be seen from the low rewards). Most interestingly, the signal z for the first time-steps exhibits little dependence on α, highlighting the agent's uncertainty on the current task. Said otherwise, for small t the agent learnt to play a (nearly) task-independent strategy. As time passes, the agent gathers further information about the current task and approaches a near-optimal policy. This reflects in convergence of z with a clear dependency on α and also in wider-spread values of z. 
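Only the reward rule of the first benchmark is fully recoverable from the description above (the mapping from the observed state to the target position p_t through α is part of the elided definition), so the sketch below illustrates just that rule; the domain bounds used for re-sampling p_{t+1} are assumptions.

```python
import numpy as np

def toy_benchmark_step(a_t: float, p_t: float, low: float = -10.0, high: float = 10.0):
    """One step of the 1-D toy benchmark's reward and target update.

    Returns (r_t, p_next): reward 10 and a re-sampled target when the action lands
    within distance 1 of p_t, otherwise the negative distance and an unchanged target.
    """
    if abs(a_t - p_t) < 1.0:
        return 10.0, float(np.random.uniform(low, high))
    return -abs(a_t - p_t), p_t
```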
For large values of t, the fact that z holds constant between time-steps shows that the neuromodulatory signal is almost state-independent and serves only for adaptation. Finally, we note that the value of z in each of its dimensions varies continuously with α, meaning that for two similar tasks, the signal will converge towards similar values. In this work, we use a high-level view of a nervous system mechanism called cellular neuromodulation to improve the adaptive capabilities of artificial neural networks. The results obtained on three meta-RL benchmark problems showed that this new architecture was able to perform better than classical RNNs. The work reported in this paper could be extended along several lines. First, it would be interesting to explore other types of machine-learning problems where adaptation is required. Second, research work could also be carried out to further improve the NMN introduced here. For instance, one could introduce new types of parametric activation functions which are not linear, or spiking neurons. It would also be of interest to look at sharing activation function parameters per layer. Furthermore, analysing the neuromodulatory signal (and its impact on activation functions) more in depth with respect to different, more complex tasks could also be worthwhile. Finally, let us emphasize that even if the results obtained by our NMN are good and also rather robust with respect to a large choice of parameters, further research is certainly still needed to better characterise their performance. | This paper introduces neuromodulation in artificial neural networks. | 1,237 | scitldr
Convolution operator is the core of convolutional neural networks (CNNs) and occupies the most computation cost. To make CNNs more efficient, many methods have been proposed to either design lightweight networks or compress models. Although some efficient network structures have been proposed, such as MobileNet or ShuffleNet, we find that there still exists redundant information between convolution kernels. To address this issue, we propose a novel dynamic convolution method named \textbf{DyNet} in this paper, which can adaptively generate convolution kernels based on image contents. To demonstrate the effectiveness, we apply DyNet on multiple state-of-the-art CNNs. The experiment show that DyNet can reduce the computation cost remarkably, while maintaining the performance nearly unchanged. Specifically, for ShuffleNetV2 (1.0), MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces 40.0%, 56.7%, 68.2% and 72.4% FLOPs respectively while the Top-1 accuracy on ImageNet only changes by +1.0%, -0.27%, -0.6% and -0.08%. Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87x,1.32x and 1.48x on CPU platform respectively. To verify the scalability, we also apply DyNet on segmentation task, the show that DyNet can reduces 69.3% FLOPs while maintaining the Mean IoU on segmentation task. Convolutional neural networks (CNNs) have achieved state-of-the-art performance in many computer vision tasks , and the neural architectures of CNNs are evolving over the years (; ; ; ; ; a; b). However, modern high-performance CNNs often require a lot of computation resources to execute large amount of convolution kernel operations. Aside from the accuracy, to make CNNs applicable on mobile devices, building lightweight and efficient deep models has attracting much more attention recently (; ;). These methods can be roughly categorized into two types: efficient network design and model compression. Representative methods for the former category are MobileNet and ShuffleNet (;, which use depth-wise separable convolution and channel-level shuffle techniques to reduce computation cost. On the other hand, model compression based methods tend to obtain a smaller network by compressing a larger network via pruning, factorization or mimic (; a; ; ;). Although some handcrafted efficient network structures have been designed, we observe that the significant correlations still exist among convolutional kernels, and introduce large amount of redundant calculations. Moreover, these small networks are hard to compress. For example, compress MobileNetV2 to 124M, but the accuracy drops by 5.4% on ImageNet. We theoretically analyze above observation, and find that this phenomenon is caused by the nature of static convolution, where correlated kernels are cooperated to extract noise-irrelevant features. Thus it is hard to compress the fixed convolution kernels without information loss. We also find that if we linearly fuse several convolution kernels to generate one dynamic kernel based on the input, we can obtain the noise-irrelevant features without the cooperation of multiple kernels, and further reduce the computation cost of convolution layer remarkably. Based on above observations and analysis, in this paper, we propose a novel dynamic convolution method named DyNet. The overall framework of DyNet is shown in Figure 1, which consists of a coefficient prediction module and a dynamic generation module. 
The coefficient prediction module is trainable and designed to predict the coefficients of fixed convolution kernels. Then the dynamic generation module further generates a dynamic kernel based on the predicted coefficients. Our proposed dynamic convolution method is simple to implement, and can be used as a drop-in plugin for any convolution layer to reduce computation cost. We evaluate the proposed DyNet on state-of-the-art networks such as MobileNetV2, ShuffleNetV2 and ResNets. Experiment show that DyNet reduces 37.0% FLOPs of ShuffleNetV2 (1.0) while further improve the Top-1 accuracy on ImageNet by 1.0%. For MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces 54.7%, 67.2% and 71.3% FLOPs respectively, the Top-1 accuracy on ImageNet changes by −0.27%, −0.6% and −0.08%. Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87×,1.32×and 1.48× on CPU platform respectively. We review related works from three aspects: efficient convolution neural network design, model compression and dynamic convolutional kernels. In many computer vision tasks , model design plays a key role. The increasing demands of high quality networks on mobile/embedding devices have driven the study on efficient network design . For example, GoogleNet increases the depth of networks with lower complexity compared to simply stacking convolution layers; SqueezeNet deploys a bottleneck approach to design a very small network; Xception , MobileNet and MobileNetV2 use depth-wise separable convolution to reduce computation and model size. ShuffleNet and ShuffleNetV2 shuffle channels to reduce computation of 1 × 1 convolution kernel and improve accuracy. Despite the progress made by these efforts, we find that there still exists redundancy between convolution kernels and cause redundant computation. Another trend to obtaining small network is model compression. Factorization based methods try to speed up convolution operation by using tensor decomposition to approximate original convolution operation. Knowledge distillation based methods (; ;) learn a small network to mimic a larger teacher network. Pruning based methods (a; b; ;) try to reduce computation by pruning the redundant connections or convolution channels. Compared with those methods, DyNet is more effective especially when the target network is already efficient enough. For example, in , they get a smaller model of 124M FLOPs by pruning the MobileNetV2, however it drops the accuracy by 5.4% on ImageNet compared with the model with 291M FLOPs. While in DyNet, we can reduce the FLOPs of MobileNetV2 (1.0) from 298M to 129M with the accuracy drops only 0.27%. Generating dynamic convolution kernels appears in both computer vision and natural language processing (NLP) tasks. In computer vision domain, Klein et al. and Brabandere et al. directly generate convolution kernels via a linear layer based on the feature maps of previous layers. Because convolution kernels has a large amount of parameters, the linear layer will be inefficient on the hardware. Our proposed method solves this problem via merely predicting the coefficients for linearly combining static kernels and achieve real speed up for CNN on hardware. The idea of linearly combining static kernels using predicted coefficients has been proposed by Yang et al., but they focus on using more parameters to make models more expressive while we focus on reducing redundant calculations in convolution. 
We make theoretical analysis and conduct correlation experiment to prove that correlations among convolutional kernels can be reduced via dynamically fusing several kernels. In NLP domain, some works; ) incorporate context information to generate input-aware convolution filters which can be changed according to input sentences with various lengths. These methods also directly generate convolution kernels via a linear layer, etc. Because the size of CNN in NLP is smaller and the dimension of convolution kernel is one, the inefficiency issue for the linear layer is alleviated. Moreover, Wu et al. alleviate this issue utilizing the depthwise convolution and the strategy of sharing weight across layers. These methods are designed to improve the adaptivity and flexibility of language modeling, while our method aims to cut down the redundant computation cost. In this section, we first describe the motivation of DyNet. Then we explain the proposed dynamic convolution in detail. Finally, we illustrate the DyNet based architectures of our proposed Dy-mobile, Dy-shuffle, Dy-ResNet18, Dy-ResNet50. As illustrated in previous works (a; b; ;), convolutional kernels are naturally correlated in deep models. For some of the well known networks, we plot the distribution of Pearson product-moment correlation coefficient between feature maps in Figure 2. Most existing works try to reduce correlations by compressing. However, efficient and small networks like MobileNets are harder to prune despite the correlation is still significant. We think these correlations are vital for maintaining the performance because they are cooperated to obtain noiseirrelevant features. We take face recognition as an example, where the pose or the illumination is not supposed to change the classification . Therefore, the feature maps will gradually become noise-irrelevant when they go deeper. Based on the theoretical analysis in appendix A, we find that if we dynamically fuse several kernels, we can get noise-irrelevant feature without the cooperation of redundant kernels. In this paper, we propose dynamic convolution method, which learns the coefficients to fuse multiple kernels into a dynamic one based on image contents. We give more in depth analysis about our motivation in appendix A. The goal of dynamic convolution is to learn a group of kernel coefficients, which fuse multiple fixed kernels to a dynamic one. We demonstrate the overall framework of dynamic convolution in Figure 1. We first utilize a trainable coefficient prediction module to predict coefficients. Then we further propose a dynamic generation module to fuse fixed kernels to a dynamic one. We will illustrate the coefficient prediction module and dynamic generation module in detail in the following of this section. Coefficient prediction × × × × Figure 3: The coefficient prediction module. Coefficient prediction module Coefficient prediction module is proposed to predict coefficients based on image contents. As shown in Figure 3, the coefficient prediction module can be composed by a global average pooling layer and a fully connected layer with Sigmoid as activation function. Global average pooling layer aggregates the input feature maps into a 1 × 1 × C in vector, which serves as a feature extraction layer. Then the fully connected layer further maps the feature into a 1 × 1 × C vector, which are the coefficients for fixed convolution kernels of several dynamic convolution layers. 
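The coefficient prediction module just described maps the block input to one coefficient per fixed kernel through global average pooling, a fully connected layer and a sigmoid; a minimal sketch is shown below (the layer composition follows the description above, everything else is an illustrative assumption).

```python
import torch
import torch.nn as nn

class CoefficientPrediction(nn.Module):
    """Predict fusion coefficients for the fixed kernels of the dynamic convolution
    layers in one block, based on the block's input feature maps."""

    def __init__(self, in_channels: int, num_coefficients: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # aggregate to a 1x1xC_in vector
        self.fc = nn.Linear(in_channels, num_coefficients)   # map to one coefficient per kernel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C_in, H, W) -> coefficients in (0, 1), shape (B, num_coefficients)
        return torch.sigmoid(self.fc(self.pool(x).flatten(1)))
```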
Dynamic generation module For a dynamic convolution layer with weight [C out × g t, C in, k, k], it corresponds with C out × g t fixed kernels and C out dynamic kernels, the shape of each kernel is [C in, k, k]. g t denotes the group size, it is a hyperparameter. We denote the fixed kernels as w i t, the dynamic kernels as w t, the coefficients as η i t, where t = 0,..., C out, i = 0,..., g t. After the coefficients are obtained, we generate dynamic kernels as follows: Training algorithm For the training of the proposed dynamic convolution, it is not suitable to use batch based training scheme. It is because the convolution kernel is different for different input images in the same mini-batch. Therefore, we fuse feature maps based on the coefficients rather than kernels during training. They are mathematically equivalent as shown in Eq. 2: We equip the MobileNetV2, ShuffleNetV2 and ResNets with our proposed dynamic convolution, and propose Dy-mobile, Dy-shuffle, Dy-ResNet18 and Dy-ResNet50 respectively. The building blocks of these 4 network are shown in Figure 4. Based on above dynamic convolution, each dynamic kernel can get noise-irrelevant feature without the cooperation of other kernels. Therefore we can reduce the channels for each layer of those base models and remain the performance. We set the hyper-parameter g t as 6 for all of them, and we give details of these dynamic CNNs below. Dy-mobile In our proposed Dy-mobile, we replace the original MobileNetV2 block with our dymobile block, which is shown in Figure 4 (a). The input of coefficient prediction module is the input of block, it produces the coefficients for all three dynamic convolution layers. Moreover, we further make two adjustments: • We do not expand the channels in the middle layer like MobileNetV2. If we denote the output channels of the block as C out, then the channels of all the three convolution layers will be C out. • Since the depth-wise convolution is efficient, we set groups = Cout 6 for the dynamic depthwise convolution. We will enlarge C out to make it becomes the multiple of 6 if needed. After the aforementioned adjustments, the first dynamic convolution layer reduce the FLOPs from 6C 2 HW to C 2 HW. The second dynamic convolution layer keep the FLOPs as 6CHW × 3 2 unchanged because we reduce the output channels by 6x while setting the groups of convolution 6x smaller, too. For the third dynamic convolution layer, we reduce the FLOPs from 6C 2 HW to C 2 HW as well. The ratio of FLOPs for the original block and our dy-mobile block is: Dy-shuffle In the original ShuffleNet V2, channel split operation will split feature maps to rightbranch and left-branch, the right branch will go through one pointwise convolution, one depthwise convolution and one pointwise convolution sequentially. We replace conventional convolution with dynamic convolution in the right branch as shown in Figure 4 (b). We feed the input of right branch into coefficient prediction module to produce the coefficients. In our dy-shuffle block, we split channels into left-branch and right-branch with ratio 3: 1, thus we reduce the 75% computation cost for two dynamic pointwise convolutuon. Similar with dy-mobile, we adjust the parameter "groups" in dynamic depthwise convolution to keep the FLOPs unchanged. In Dy-ResNet18 and DyResNet50, we simple reduce half of the output channels for dynamic convolution layers of each residual block. 
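The sketch below illustrates the training-time behaviour of one dynamic convolution layer: the input is convolved with all C_out·g_t fixed kernels and the resulting feature maps are fused with the predicted coefficients (Eq. 2), which is mathematically equivalent to fusing the kernels themselves (Eq. 1). The channel grouping order and the "same" padding are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def dynamic_conv_train(x: torch.Tensor, fixed_kernels: torch.Tensor,
                       eta: torch.Tensor, g_t: int) -> torch.Tensor:
    """Training-time dynamic convolution by fusing feature maps (Eq. 2).

    x:             (B, C_in, H, W) input feature maps
    fixed_kernels: (C_out * g_t, C_in, k, k) fixed kernels w_i^t
    eta:           (B, C_out * g_t) per-sample coefficients from the prediction module
    """
    k = fixed_kernels.shape[-1]
    y = F.conv2d(x, fixed_kernels, padding=k // 2)        # (B, C_out * g_t, H, W)
    B, _, H, W = y.shape
    y = y.view(B, -1, g_t, H, W)                          # group the g_t maps of each dynamic kernel
    return (y * eta.view(B, -1, g_t, 1, 1)).sum(dim=2)    # fused output, (B, C_out, H, W)
```

At inference time, where the coefficients of a single image are fixed, the same output is obtained more cheaply by first fusing the kernels with Eq. 1 and running a single convolution.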
Because the input channels of each block is large compared with dy-mobile and dy-shuffle, we use two linear layer as shown in Figure 4 (c) and Figure 4 (d) to reduce the amount of parameters. If the input channel is C in, the output channels of the first linear layer will be Cin 4 for Dy-ResNet18/50. For the training of the proposed dynamic neural networks. Each image has data augmentation of randomly cropping and flipping, and is optimized with SGD strategy with cosine learning rate decay. We set batch size, initial learning rate, weight decay and momentum as 2048, 0.8, 5e-5 and 0.9 respectively. We also use the label smoothing with rate 0.1. We evaluate the accuracy on the test images with center crop. We evaluate DyNet on ImageNet , which contains 1.28 million training images and 50K validation images collected from 1000 different classes. We train the proposed networks on the training set and report the top-1 error on the validation set. To demonstrate the effectiveness, we compare the proposed dynamic convolution with state-of-the-art networks under mobile setting, including MobileNetV1 , MobileNetV2 , ShuffleNet, ShuffleNet V2 , Xception , DenseNet , IGCV2 and IGCV3. This shows that even though the proposed network significantly reduces the convolution computation cost, the generated dynamic kernel can still capture sufficient information from image contents. The also indicate that the proposed dynamic convolution is a powerful plugin, which can be implemented on convolution layers to reduce computation cost while maintaining the accuracy. Furthermore, we conduct detailed experiments on MobileNetV2. We replace the conventional convolution with the proposed dynamic one and get Dy-MobileNetV2. The accuracy of classification for models with different number of channels are shown in Figure 5. It is observed that DyMobileNetV2 consistently outperforms MobileNetV2 but the ascendancy is weaken with the increase of number of channels. The correlation distribution of fixed kernel and the generated dynamic kernel, S, M, W, N denote strong, middle, weak and no correlation respectively. We can observe that compared with conventional fixed kernels, the generated dynamic kernels have small correlation values. Aside from the quantitative analysis, we also demonstrate the redundancy of the generated dynamic kernels compared with conventional fixed kernels in Figure 6. We calculate the correlation between each feature maps output by the second last stage for the original MobileNetV2(1.0) and Dy-MobileNetV2 (1.0). Note that Dy-MobileNetV2 (1.0) is different with Dy-mobile(1.0). Dy-MobileNetV2(1.0) keeps the channels of each layer the same as the original one, while replace the conventional convolution with dynamic convolution. As shown in Figure 6, we can observe that the correlation distribution of dynamic kernels have more values distributed between −0.1 and 0.2 compared with fixed convolution kernel, which indicates that the redundancy between dynamic convolution kernels are much smaller than the fixed convolution kernels. Analysis of speed on the hardware We also analysis the inference speed of DyNet. We carry out experiments on the CPU platform (Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz) with Caffe . We set the size of input as 224 and report the average inference time of 50 iterations. It is reasonable to set mini-batch size as 1, which is consistent with most inference scenarios. The are shown in Table 2. 
Moreover, the latency of fusing fixed kernels is independent with the input size, thus we expect to achieve bigger acceleration ratio when the input size of networks become larger. We conduct experiments to verify this assumption, the are shown in Figure 7. We can observe that the ratio of reduced latency achieved by DyNet gets bigger as the input size becomes larger. As shown in , a larger input size can make networks perform significantly better, thus DyNet is more effective in this scenario. We also analysis the training speed on the GPU platform. The model is trained with 32 NVIDIA Tesla V100 GPUs and the batch size is 2048. We report the average training time of one iteration in Table 2. It is observed that the training speed of DyNet is slower, it is reasonable because we fuse feature maps rather than kernels according to Eq. 2 in the training stage. To verify the scalability of DyNet on other tasks, we conduct experiments on segmentation. Compared to the method Dilated FCN with ResNet50 as basenet , Dilated FCN with DyResNet50 reduces 69.3% FLOPs while maintaining the MIoU on Cityscapes validation set. The are shown in Table 3. Comparison between dynamic convolution and static convolution We correspondingly design two networks without dynamic convolution. Specifically, we remove the correlation prediction module and use fixed convolution kernel for Dy-mobile (1.0) and Dy-shuffle (1.5), and we keep the channel number the same as the dynamic convolution neural networks. We denote the baseline networks as Fix-mobile(1.0) and Fix-shuffle (1.5) respectively. The are shown in Table 4, compare with baseline networks Fix-mobile (1.0) and Fix-shuffle (1.5), the proposed Dy-mobile (1.0) and Dy-shuffle (1.5) achieve absolute classification improvements by 5.19% and 2.82% respectively. This shows that directly decreasing the channel number to reduce computation cost influences the classification performance a lot. While the proposed dynamic kernel can retain the representation ability as mush as possible. Effectiveness of g t for dynamic kernel The group size g t in Eq. 1 does not change the computation cost of dynamic convolution, but affects the performance of network. Thus we provide ablative study on g t. We set g t as 2,4,6 for dy-mobile(1.0) respectively and the are shown in Table 5. The performance of dy-mobile(1.0) becomes better when g t gets larger. It is reasonable because larger g t means the number of kernels cooperated for obtaining one noise-irrelevant feature becomes larger. When g t = 1, the coefficient prediction module can be regarded as merely learning the attention for different channels, which can improve the performance of networks as well . Therefore we provide ablative study for comparing g t = 1 and g t = 6 on Dy-mobile(1.0) and Dy-ResNet18. The are shown in Table 6. From the table we can see that, setting g t = 1 will reduce the Top-1 accuracy on ImageNet for Dy-mobile(1.0) and Dy-ResNet18 by 2.58% and 2.79% respectively. It proves that the improvement of our proposed dynamic networks does not only come from the attention mechanism. We illustrate our motivation from a convolution with output f (x), i.e., where ⊗ denotes the convolutional operator, x ∈ R n is a vectorized input and w ∈ R n means the filter. Specifically, the i th element of the convolution output f (x) is calculated as: where ·, · provides an inner product and x (i) is the circular shift of x by i elements. We define the index i started from 0. 
We denote the noises in x (i) as d−1 j=0 α j y j, where α j ∈ R and {y 0, y 1, ..., y d−1} are the base vectors of noise space Ψ. Then the kernels in one convolutional layer can be represented as {w 0, w 1, ..., w c}. The space expanded by {w 0, w 1, ..., w c} is Ω. We can prove if the kernels are trained until Ψ ⊂ Ω, then for each w k / ∈ Ψ, we can get the noise-irrelevant f i (x white) = x, w k by the cooperation of other kernels w 0, w 1,.... Firstly x (i) can be decomposed as: where β ∈ R andx ∈ R n is vertical to w k and y j. For concision we assume the norm of w k and y j is 1. Then, When there is no noise, i.e. α j = 0 for j = 0, 1,..., d − 1, the white output f i (x white) becomes: It is proved in the Appendix A.2 that: where β 0,..., β c is determined by the input image. Eq. 9 is fulfilled by linearly combine convolution output w k, x (i) and w t, x (i) for those β t = 0 in the following layers. Thus if there are N coefficients in Eq. 9 that are not 0, then we need to carry out N times convolution operation to get the noise-irrelevant output of kernel w t, this causes redundant calculation. In Eq. 9, we can observe that the computation cost can be reduced to one convolution operation by linearly fusing those kernels to a dynamic one: w = (a 00 + β k)w k + t =k,βt =0 β t w t f i (x white) = w, x (i). In Eq. 10, the coefficients β 0, β 1,... is determined by α 0, α 1,..., thus they should be generated based on the input of network. This is the motivation of our proposed dynamic convolution. A.2 PROVING OF EQ. 9 We denote g ij (x) as x (i), y j, j = 0, 1,..., d − 1. Then, g ij (x) = x (i), y j = x (i) + βw k +... We simplify this equation as: Because w k / ∈ Ψ, we can denote w k as: where w ⊥ is vertical to y 0,..., y d−1 and γ ⊥ = 0. moreover because |w k | = 1,thus It can be easily proved that: thus, If we denote the elements of the first row of A −1 as a 00, a 01,..., a 0d, then f i (x white) = β = a 00 f i (x) + Because Ψ ⊂ Ω, there exists {β t ∈ R|t = 0, 1, ..., c} that Then, f i (x white) = a 00 w k + t β t w t, x (i) = (a 00 + β k) w k, x (i) + t =k | We propose a dynamic convolution method to significantly accelerate inference time of CNNs while maintaining the accuracy. | 1,238 | scitldr |
Operating deep neural networks on devices with limited resources requires the reduction of their memory footprints and computational requirements. In this paper we introduce a training method, called look-up table quantization (LUT-Q), which learns a dictionary and assigns each weight to one of the dictionary's values. We show that this method is very flexible and that many other techniques can be seen as special cases of LUT-Q. For example, we can constrain the dictionary trained with LUT-Q to generate networks with pruned weight matrices or restrict the dictionary to powers-of-two to avoid the need for multiplications. In order to obtain fully multiplier-less networks, we also introduce a multiplier-less version of batch normalization. Extensive experiments on image recognition and object detection tasks show that LUT-Q consistently achieves better performance than other methods with the same quantization bitwidth. In this paper, we propose a training method for reducing the size and the number of operations of a deep neural network (DNN) that we call look-up table quantization (LUT-Q). As depicted in Fig. 1, LUT-Q trains a network that represents the weights W ∈ R^{O×I} of one layer by a dictionary d ∈ R^K and assignments A ∈ [1, ..., K]^{O×I} such that Q_oi = d_{A_oi}, i.e., elements of Q are restricted to the K dictionary values in d. To learn the assignment matrix A and dictionary d, we iteratively update them after each minibatch. Our LUT-Q algorithm, run for each mini-batch, is summarized in TAB1. LUT-Q has the advantage of being very flexible. By simple modifications of the dictionary d or the assignment matrix A, it can implement many weight compression schemes from the literature. For example, we can constrain the assignment matrix and the dictionary in order to generate a network with pruned weight matrices. Alternatively, we can constrain the dictionary to contain only the values {−1, 1} and obtain a Binary Connect Network BID3, or to {−1, 0, 1}, resulting in a Ternary Weight Network BID12. Furthermore, with LUT-Q we can also achieve multiplier-less networks by either choosing a dictionary d whose elements d_k are of the form d_k ∈ {±2^{b_k}} for all k = 1, ..., K with b_k ∈ Z, or by rounding the output of the k-means algorithm to powers-of-two. In this way we can learn networks whose weights are powers-of-two and can, hence, be implemented without multipliers. The memory used for the parameters is dominated by the weights in affine/convolution layers. Using LUT-Q, instead of storing W, the dictionary d and the assignment matrix A are stored. Hence, for an affine/convolution layer with N parameters, we reduce the memory usage in bits from N·B_float to just K·B_float + N·⌈log2 K⌉, where B_float is the number of bits used to store one weight. Furthermore, using LUT-Q we also achieve a reduction in the number of computations: for example, affine layers trained using LUT-Q need to compute just K multiplications per output at inference time, instead of I multiplications for a standard affine layer with I input nodes, since the output can be rearranged as y_o = Σ_i d_{A_oi} x_i = Σ_{k=1}^{K} d_k (Σ_{i: A_oi = k} x_i). For the description of our results we use the following naming convention: Quasi multiplier-less networks avoid multiplications in all affine/convolution layers, but they are not completely multiplier-less since they contain multiplications in standard batch normalization (BN) layers. For example, the networks described in BID23 are quasi multiplier-less.
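A minimal sketch of the weight-tying update (Step 4 of TAB1) is shown below as one k-means iteration over the flattened full-precision weights of a layer; the dictionary initialization and the handling of empty clusters are assumptions, and the authors' exact per-minibatch schedule is only summarized here.

```python
import numpy as np

def lutq_weight_tying_step(W: np.ndarray, d: np.ndarray):
    """One LUT-Q weight-tying update: assign every weight to its nearest dictionary value,
    refit each dictionary entry as the mean of its assigned weights, and return the tied
    (quantized) weights Q with Q_oi = d[A_oi]."""
    flat = W.reshape(-1)
    A = np.abs(flat[:, None] - d[None, :]).argmin(axis=1)   # assignment matrix (flattened)
    for k in range(d.size):                                  # dictionary update
        mask = A == k
        if mask.any():
            d[k] = flat[mask].mean()
    Q = d[A].reshape(W.shape)
    return Q, A.reshape(W.shape), d
```

Constraining d to powers-of-two at this point, or rounding its entries to ±2^{b_k}, yields the multiplier-less variant described above.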
Finally, we call all other networks unconstrained. We conducted extensive experiments with LUT-Q and multiplier-less networks on the CIFAR-10 image classification task BID10, on the ImageNet ILSVRC-2012 task BID20 and on the Pascal VOC object detection task BID4. All experiments are carried out with the Sony Neural Network Library. For CIFAR-10, we first use the full precision 32-bit ResNet-20 as reference (7.4% error rate). Quasi multiplier-less networks using LUT-Q achieve 7.6% and 8.0% error rate for 4-bit and 2-bit quantization respectively. Fully multiplier-less networks with LUT-Q achieve 8.1% and 9.0% error rates, respectively. LUT-Q can also be used to prune and quantize networks simultaneously. Fig. 2 shows the error rate increase between the baseline full precision ResNet-20 and the pruned and quantized network. Using LUT-Q we can prune the network up to 70% and quantize it to 2-bit without significant loss in accuracy. For ImageNet, we used ResNet-18, ResNet-34 and ResNet-50 BID7 as reference networks. We report their validation error in TAB2. In TAB2, we compare LUT-Q against the published results using the INQ approach BID23, which also trains networks with power-of-two weights. We also compare with the baselines reported in BID14, which correspond to the best results from the literature for each weight and activation quantization configuration. Note that we cannot directly compare the results of this apprentice method BID14 itself because they do not quantize the first and last layers of the ResNets. We observe that LUT-Q always achieves better performance than other methods with the same weight and activation bitwidth, except for ResNet-18 with 2-bit weight and 8-bit activation quantization. Remarkably, ResNet-50 with 2-bit weights and 8-bit activations achieves a 26.9% error rate, which is only 1.0% worse than the baseline. The memory footprint for parameters and activations of this network is only 7.4MB compared to 97.5MB for the full precision network. Furthermore, the number of multiplications is reduced by two orders of magnitude and most of them can be replaced by bit-shifts. Finally, we evaluated LUT-Q on the Pascal VOC BID4 object detection task. We use our implementation of YOLOv2 BID18 as baseline. This network has a memory footprint of 200MB and achieves a mean average precision (mAP) of 72% on Pascal VOC. We were able to reduce the total memory footprint by a factor of 20 while maintaining the mAP above 70% by carrying out several modifications: replacing the feature extraction network with traditional residual networks BID7, replacing the convolution layers by factorized convolutions BID4, and finally applying LUT-Q in order to quantize the weights of the network to 8-bit. Using LUT-Q with 4-bit quantization we are able to further reduce the total memory footprint down to just 1.72MB and still achieve an mAP of about 64%. The per-mini-batch LUT-Q algorithm summarized in TAB1 proceeds by computing the current cost and gradients (Step 2), updating the full precision weights, here with SGD (Step 3), and updating the weight tying by k-means (Step 4). Different compression methods were proposed in the past in order to reduce the memory footprint and the computational requirements of DNNs: pruning BID5 BID11, quantization BID1 BID6 BID22, and teacher-student network training BID8 BID14 BID17 BID19 are some examples.
In general, we can classify the methods for quantization of the parameters of a neural network into three types:• Soft weight sharing: These methods train the full precision weights such that they form clusters and therefore can be more efficiently quantized BID0 BID1 BID13 BID16 BID22 ].• Fixed quantization: These methods choose a dictionary of values beforehand to which the weights are quantized. Afterwards, they learn the assignments of each weight to the dictionary entries. Examples are Binary Neural Networks BID3, Ternary Weight Networks BID12 and also BID14 BID15.• Trained quantization: These methods learn a dictionary of values to which weights are quantized during training. However, the assignment of each weight to a dictionary entry is fixed BID6.Our LUT-Q approach takes the best of the latter two methods: For each layer, we jointly update both dictionary and weight assignments during training. This approach to compression is similar to Deep Compression BID6 in the way that we learn a dictionary and assign each weight in a layer to one of the dictionary's values using the k-means algorithm, but we update iteratively both assignments and dictionary at each mini-batch iteration. We have presented look-up table quantization, a novel approach for the reduction of size and computations of deep neural networks. After each minibatch update, the quantization values and assignments are updated by a clustering step. We show that the LUT-Q approach can be efficiently used for pruning weight matrices and training multiplier-less networks as well. We also introduce a new form of batch normalization that avoids the need for multiplications during inference. As argued in this paper, if weights are quantized to very low bitwidth, the activations may dominate the memory footprint of the network during inference. Therefore, we perform our experiments with activations quantized uniformly to 8-bit. We believe that a non-uniform activation quantization, where the quantization values are learned parameters, will help quantize activations to lower precision. This is one of the promising directions for continuing this work. Recently, several papers have shown the benefits of training quantized networks using a distillation strategy BID8 BID14. Distillation is compatible with our training approach and we are planning to investigate LUT-Q training together with distillation. From BID9 we know that the traditional batch normalization (BN) at inference time for the oth output is DISPLAYFORM0 where x and y are the input and output vectors to the BN layer, γ and β are parameters learned during training, E [x] and VAR [x] are the running mean and variance of the input samples, and ǫ is a small constant to avoid numerical problems. During inference, γ, β, E [x] and VAR [x] are constant and, therefore, the BN function can be written as DISPLAYFORM1 where we use the scale DISPLAYFORM2 In order to obtain a multiplier-less BN, we require a to be a vector of powers-of-two during inference. This can be achieved by quantizing γ toγ. The quantizedγ is learned with the same idea as for WT: During the forward pass, we use traditional BN with the quantizedγ =â/ VAR[x] + ǫ whereâ is obtained from a by using the power-of-two quantization. Then, in the backward pass, we update the full precision γ. Please note that the computations during training time are not multiplier-less but γ is only learned such that we obtain a multiplier-less BN during inference time. 
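To make the multiplier-less BN of appendix A concrete, the following is a hedged sketch of the inference path described above: since the scale a = γ̂ / sqrt(VAR[x] + ε) is a power of two by construction, the per-channel multiplication reduces to a signed exponent shift. The power-of-two rounding helper and the use of ldexp are our illustrative choices; the paper only requires that a be a vector of powers-of-two.

```python
import numpy as np

def round_to_power_of_two(a, eps=1e-12):
    """Round each scale to the nearest signed power of two (in log2 space)."""
    return np.sign(a) * 2.0 ** np.round(np.log2(np.abs(a) + eps))

def multiplierless_bn_inference(x, gamma_hat, beta, running_mean, running_var, eps=1e-5):
    a = gamma_hat / np.sqrt(running_var + eps)   # a power of two if gamma_hat was trained as in appendix A
    a = round_to_power_of_two(a)                 # guard against numerical drift
    b = beta - a * running_mean                  # y = a*x + b with the offsets folded into b
    shifts = np.round(np.log2(np.abs(a))).astype(int)
    return np.sign(a) * np.ldexp(x, shifts) + b  # ldexp(x, s) = x * 2**s: a shift, no multiply
```

In a fixed-point implementation the same operation becomes a literal per-channel bit-shift.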
Our approach differs from the shift-based batch normalization proposed in BID2, which avoids all multiplications in the batch normalization operation by rounding multiplicands to powers-of-two in each forward pass. Their focus is on speeding up training by avoiding multiplications during training time, whereas our multiplier-less batch normalization avoids multiplications during inference. | In this paper we introduce a training method, called look-up table quantization (LUT-Q), which learns a dictionary and assigns each weight to one of the dictionary's values | 1,239 | scitldr |
An unintended consequence of feature sharing is the model fitting to correlated tasks within the dataset, termed negative transfer. In this paper, we revisit the problem of negative transfer in multitask setting and find that its corrosive effects are applicable to a wide range of linear and non-linear models, including neural networks. We first study the effects of negative transfer in a principled way and show that previously proposed counter-measures are insufficient, particularly for trainable features. We propose an adversarial training approach to mitigate the effects of negative transfer by viewing the problem in a domain adaptation setting. Finally, empirical on attribute prediction multi-task on AWA and CUB datasets further validate the need for correcting negative sharing in an end-to-end manner. Advances in machine learning have led to proficient supervised learning models with powerful representations in various prediction tasks. We now expect an ideal classification model to restrict itself to a pertinent set of evidences available to it from the input for prediction. Further, we expect the model to disregard any unrelated evidences in the data to enable better generalization. Figure 1: A supervised classifier'cheetah vs. snow-leopard' that uses unrelated evidence (of habitat) over relevant evidence (of fur patterns). As shown by the pixel importance maps, the model suffers from the negative transfer prevalent in a typical animal image dataset skewed towards the animal's typical habitat and fails to generalize to rare samples. Let us consider the task of training an animal classifier "cheetah vs. snow-leopards" from a dataset of images of these animals, such as those illustrated in Figure 1 -a task which ideally should focus on the animal's appearance features. However, a large portion of these images also contain various cues of the typical habitat of the animals in the , i.e., tall grass and snow (see Figures 1 (a) and (b)) which are, in principle, unrelated to the animal's appearance. An archetypal model is deceived by the co-occurrence of such unrelated, yet easily detectable cues of habitat over the animal's appearance features such as complex fur patterns. However, a proficient supervised learning model must identify relevant evidences for the label of interest and at the same time discard various unrelated evidences such as presence of snow, even though it tends to co-occur frequently with snow-leopard. Consequently, it would be more likely that such a model would perform better on rare-instances (such as those in Figures 1 (c) and (d)) and generalize better to unseen instances. This phenomenon of co-occurring but unrelated evidences being present in training data and thereby having a debilitating effect on model performance has been described in literature BID8; BID16; BID9; BID13; BID15 ). These techniques utilize the easy availability of labels for unrelated evidences (e.g. habitat labels above), called negative labels which constitutes an auxiliary task, and seek to mitigate its debilitating performance on the primary task (e.g. animal classification above) with techniques referred to as negative-sharing or negative-transfer. While all of these techniques have tackled this problem utilizing various forms of regularization, we describe several shortcomings of this class of approaches, most notable of which is their inapplicability to the popular paradigm of trainable features obtained via neural representation learning. 
Motivated by these limitations, in this paper we depart from the direction of regularization-based approaches and examine methods inspired from a domain-adaptation viewpoint to propose an adversarial training-based formulation. We uniquely view such a scenario as an instance of adversarial multi-task learning, where the classification tasks are either the primary task of interest (i.e., predicting the presence of fur pattern and color) or the auxiliary negative tasks (i.e., characteristics of habitat) to be avoided. Since the 2 tasks are unrelated, any label correlation between primary and auxiliary labels in the training data is only by chance and therefore from a domain-adaptation perspective, we envision a target-domain as possibly having a different correlation between the primary and auxiliary labels. The effects of negative transfer are hence mitigated when the classification task is trained in this domain. We discuss advantages of our proposed formulation, inspired from domain-adaptation, to alleviate the negative transfer over existing techniques, including ready applicability to neural networks in an end-to-end fashion. It must be noted that, while the formulation of the problem is motivated with multi-task learning, negative-transfer is a disposition of any supervised learning task from simple binary classification to recent popular supervised tasks such as image detection, captioning, or visual dialog. We present motivating literature that prelude this work next. Image classification literature often characterize mid-level features as attribute predictors in multilabel learning. The inability of models to learn predictors with the correct semantic concept is attributed to negative transfer. To our knowledge, the predominent approach to tackle negative transfer in such setting was the use of specific regularizers BID9 BID8. Specifically, the primary model avoid using features which are important for the auxiliary task, leading to models competing for features. BID8 further extends this idea to attribute groups, where feature selection sparsity is induced across group, but encourage within them. We highlight three limitations of feature competition techniques below:• Repeated features: Consider the simple scenario where some features in the feature representations are repeated or dependent on others. Feature competition enforces tasks to pick unique features, however they are implicitly the same. Going back to the case of'cheetah vs snow leopard' example mentioned earlier, when there are two copies of a feature which captures'snow in the ', then both primary and auxiliary classifiers would just pick different copies of that feature. Here, the idea of resisting cheetah/snow leopard classifier from picking features which capture snow is negated.• Trainable features: Neural representations have become extremely popular owing the power of learning trainable features. However, feature competition techniques fail with prediction models with trainable features. If there are features that are important for both primary and auxiliary task, models involving a trainable feature setup will lead to duplicating the same features, thereby ing in the repeated feature scenario.• Easy auxiliary task with abundant features: Consider a scenario of an easy auxiliary task, that does not require a large number of features to be predicted correctly. Similar to the previous case, the spared features from the auxiliary task can be picked by the primary task ing in negative transfer. 
Motivated by these shortcomings, we proceed to examine the negative transfer problem in a domain adaptation setting. In this section we first formalize and explain the problem setting of negative transfer. We present scenarios where earlier attempts (regularization-based approaches) to tackle this problem fails. We then explain our formulation for posing then negative transfer problem in a domain adaptation setting and derive adversarial learning algorithms to solve negative transfer. Lastly, we present a thorough analysis of the proposed formulation to address negative transfer by experimenting on a carefully designed synthetic dataset. In typical 2-label classification problem, we assume that training data and all future test examples are drawn i.i.d from a probability distribution D on instance space X and a labeling function f p: X → {+1, −1} for the primary task labels and f a: X → {+1, −1} for the auxiliary task labels. Every instance x ∈ X has primary and auxiliary labels: y p and y a respectively. The goal is to learn a classifier which performs well on future samples from D, which may have a different label correlation since the tasks are unrelated. Formally, we capture this label correlation via a joint label distribution DISPLAYFORM0, and we assume that P (Y) in training and test are different. This problem setup is different from earlier works on negative transfer in the way how unrelated tasks are defined. We define unrelated tasks as the ones which can have different correlation in training and test data. In unrelated tasks is referred as tasks which are defined over orthogonal sets of features. BID17 uses the term negative correlation for unrelatedness among tasks. Two tasks are negatively correlated when one feature is deemed to be important to first task makes it unlikely to be important to the second task. Let the instances in training data be drawn from source distribution D S. The aim is to train a classifier on this which performs well on instances drawn from target distribution D T. Instances in D S has primary and auxiliary labels correlated with joint label distribution DISPLAYFORM0 We assume that the labelling function f p and f a are universal and common for both source and target domains. Typical unsupervised domain adaptation setting has labelled instances from source distribution and also unlabelled instances from target distribution. However, we have no information (neither labels nor instances) about the target distribution. The only information we have about the target domain is P T (Y). We are either provided an estimate of P T (Y) from an oracle, or we make an assumption on P T (Y) (for instance, a uniform distribution over the space of Y).Consider U T, the space of all distributions over X which has label distribution P T (Y). It is extremely challenging to adapt from the source domain to all members of U T with given P T (Y). However, we can settle for a particular D T. We pick such a D T ∈ U T as that distribution over X which is nearest to D S. We model this in terms of KL divergence as a constrained optimization as follows. DISPLAYFORM1 Before explaining the solution to the above optimization problem, we show the relationship between D and P (Y). To articulate our intuition, we use Figure 3 (a) which illustrates a sample source distribution D S over 2D instance space (green). The labelling functions (in this case, f p and f a) partitions the space of X into regions such that each region corresponds to a label combination (y = {y p, y a}). 
We denote each of these regions as R y. Notice that P (Y = y) becomes the integral of D over the region R y.Let φ S and φ T be density functions of D S and D T respectively. Then, the above optimization problem becomes: DISPLAYFORM2 Where ∆ ⊃ U T is the set of all distributions over X and region R y = {x : DISPLAYFORM3 The Lagrangian equivalent for the above problem can be stated as, DISPLAYFORM4 where λ y ∈ R. Using the Euler-Lagrange equation to solve above problem, DISPLAYFORM5 Intuitively we find that D T is a scaled version of D S with scaling factor as the ratio of P T (Y) and P S (Y). Note that this scaling factor may vary for different x ∈ X depending on which region R y it falls in. This is depicted in Figure. 3(b) which is a target distribution derived from source distribution in 3(a). The regions R −1,+1 and R +1,−1 are scaled up whereas regions R +1,+1 and R −1,−1 are scaled down. Though the above derivation is for two labels, one can see that Eq. 3 extends to any number of labels in y. As mentioned earlier in this section, we have no instances from target domain. However φ T (x) allows us to assign soft domain labels to the source domain instances, which indicates the probability of that instance belonging to the target domain. Specifically, let y D be the binary label indicating if an instance belongs to target domain. Then, DISPLAYFORM6 Two possible assumptions that could be made on P T (Y) (if not provided) are uniform distribution over space of y or uncorrelated (independent labels). Next we present two methods that leverage these soft domain labels. In the previous section we modeled negative transfer as a domain adaptation problem with soft domain labels. Next, we present methods to leverage the domain adversarial neural network (DANN) by BID3 to induce domain-invariance features. These methods are based on the theoretical analysis of domain adaptation by BID1 BID0. They provide a generalization bound on target error as following:Theorem 1 (Ben-David et al. FORMULA2) Let h ∈ H be a prediction model of the form h: X → {−1, +1} where H is the hypothesis space. Let S and T be generalization errors for source (D S) and target (D T) domains. For any classifier h ∈ H DISPLAYFORM0 Above theorem states that the target error is bounded by sum of source error and distance between source and target distributions. d H∆H can be seen as the maximum accuracy of classifying source and target domain by using a classifier from hypothesis space H∆H. Further, any classifier g ∈ H∆H is the function XOR of some h, h ∈ H. DANN introduced an objective function which minimizes both source error as well as divergence between domains. Divergence between the domains can be looked at as the prediction accuracy of domain classifier. DANN models starts with a mapping J f: X → R d with a parameter θ f, which projects instances to a latent representation space, we call this the feature extractor. These features are then mapped to primary label space with a mapping function J y (label predictor) with parameter θ y. The same features from latent representation space are mapped to domain label by J d (domain classifier), with parameter θ d. We denote the training set as DISPLAYFORM1, with every instance we are provided with a primary label y p and a soft domain label y D. The objective here is to find a feature extractor J f which projects instances to a latent representation space where achievable label prediction accuracy is high and domain prediction accuracy is low. 
Let L y (θ f, θ y) be the prediction loss for label prediction and let L d (θ f, θ d) be that for domain classifier, then objective function is DISPLAYFORM2 where DISPLAYFORM3 However, in using this formulation together with aforementioned soft domain labels, in a weak J D since the soft domain labels are highly skewed towards the source domain. In such settings, DISPLAYFORM4 We address this issue again in Section 4.The effort is to make J f provide a latent representation that is unable to discriminate source and target domain. As the soft domain labels are assigned according to y = {y p, y a} (see Eq. 4), if that latent representation can be used to correctly predict y p and y a with label predictors h p ∈ H and h a ∈ H, then there exists a g ∈ H∆H which can predict source and target domains well (with g(x) = h p (x) ⊕ h a (x)). Conversely, if the representations cannot be used to predict both the labels y p and y a correctly implies poor performance on domain classification. From this observation, we propose to replace domain classification loss (second loss term of Eq. 7) with auxiliary label classifier loss L a (θ f, θ a), where θ a is the parameter for the auxiliary label classifier J a. We solve the optimization problem in Eq. 7 by using gradient reversal layer BID3. Gradient reversal layer multiplies the gradient by a negative constant during the backpropagation. We call the auxiliary label classifier as an adversarial to the primary classifier. In this form, our idea is closely related to Zhang et al. FORMULA2 We extend this two label scenario to multilabels, by partitioning labels into groups, such that related labels are together. If a label is unrelated to all other labels than it forms a singleton group. We propose a model architecture, with a latent representation (with J f) for each group. Further, the latent representation of a given group must be unable to predict any task from all other groups. We achieve this by adding all label classification other than the group member as the adversarial. In the next section, we showcase the empirical performance of multitask models trained in this way. Another interesting alternative that departs from the usual view of feature projections is to utilize feature selection for J f (feature extractor). We discuss this alternative next. An intuitive method to prevent negative transfer between correlated features is to use a feature selection approach to explicitly select relevant features appropriate for a task. We use Recursive Feature Elimination (RFE) BID5 for the task of feature selection using CNN as the classifier, since wrapper method (J.) is computationally effective and noticeably time consuming when deep nets are used as the classifier. Recursive feature selection BID5 consider all the features and eliminated them at each iteration till the desired criterion is met. At each iteration, the current feature set is used to evaluate a task using a classifier. Each of the features obtain a score from the classifier, based on which one or more features are eliminated from the set. This step is repeated until the criterion is met, which can be in terms of the final number of features to be chosen or desired classification performance to be attain. In order to perform RFE, we need to score each of the feature based on its effect on the classifier. For classifiers like logistic regression, the feature importance can be obtained by the weights assigned by the classifier for each of the features. 
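Stepping back briefly to the projection-based models: the adversarial branches described earlier in this section are trained through a gradient reversal layer (BID3), which acts as the identity in the forward pass and flips the sign of the gradient in the backward pass. A hedged PyTorch-style sketch is given below; the class and function names, the binary cross-entropy losses, and the single reversal strength lambda_ are our own illustrative choices.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)                      # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None  # reversed gradient; no gradient for lambda_

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)

def adversarial_loss(features, primary_head, aux_head, y_primary, y_aux, lambda_):
    """The primary head is trained normally; the auxiliary (adversarial) head sees the
    features through the reversal, pushing the extractor away from features that also
    predict the unrelated auxiliary labels."""
    loss_p = F.binary_cross_entropy_with_logits(primary_head(features), y_primary)
    loss_a = F.binary_cross_entropy_with_logits(aux_head(grad_reverse(features, lambda_)), y_aux)
    return loss_p + loss_a
```

In the group-wise setting described above, each group's latent representation would receive such reversed gradients from the label classifiers of all other groups.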
In adversarial settings for multilabel classification, importance of the k th feature can be calculated as: DISPLAYFORM0 where, |Y p | and |Y a | denote the number of labels to be predicted for the primary and the auxiliary task respectively. We study the adverse effects of negative transfer and how our proposed algorithms could resist this negative transfer, on synthetic data. Our synthetic dataset consists of 10-dimensional feature vectors, with two binary labels y p and y a (primary and auxiliary). We generate the data from a generative system with one class label distribution for training set and one for test set. First 5 features are sampled according to the primary label from a mixture of two Gaussian distributions with identity covariance matrix. If the primary label is 1 then 5 features are sampled from first Gaussian, otherwise from the second Gaussian. Similarly second set of 5 features are sampled same way from another set of mixture of two Gaussian distributions, this time sampled according to the auxiliary label. Note that for these experiments we are only interested in predicting primary label y p in the test set. In all the experiments we keep P (Y p = 1) = P (Y a = 1) = 0.5. We also Keep distance between 2 Gaussian distributions corresponding to primary label to fixed to 1.5. We use conditional label distribution P (Y p |Y a) as measure label correlation. We compare following algorithms in our experiments: baseline -A logistic regression classifier trained on primary label. Two sets of experiments where conducted to study the following aspects of negative transfer:• Difference between train and test label correlation: For this experiment, we trained a model on training set with P (Y p |Y a) = 0.8 and tested out it's performance on test set which only differs from training set in P (Y p |Y a). Figure 4.1(a) Shows the mean average precision of classifiers on test set. One can see that the baseline mAP drops as correlation in test set goes down. This indicates that baseline classifier captured wrong set of features for prediction. We can see that DadvC is performing marginally better than baseline. ALadvC and fs-adv are consistently performing well irrespective of varying label correlation in test. This shows that these methods picked the correct set of labels. • Easiness of auxiliary task: We fix P (Y p |Y a) = 0.8 for training set and P (Y p |Y a) = 0.5 for test. We vary the distance between Gaussian distribution corresponding to auxiliary label while keeping that of primary label fixed. By doing this we vary the easiness of auxiliary task. In Figure 4.1(b) we can see that, as the auxiliary task gets easier, more features related to auxiliary task will be used by the primary classifiers which in decreasing performance in test set. One can see that Both baseline and DadvC are performing bad as easiness increases. The proposed algorithms performs consistently better. We use the Animals-with-Attributes (AwA) datasets BID12 and Caltech-UCSD Birds 200-2011 (CUB) for the multilabel attribute prediction task. With both datasets we follow the experimental protocol from BID12. The Animals with Attributes-2 (AWA) consists of 30, 475 images of animals in natural settings, with 50 animals and 85 annotated attributes each. According to BID8, FORMULA13 into 10 relevant groups. The dataset is split across the animals as 27/13/10 for train, validation, and test respectively as done in BID12. 
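(Returning briefly to the feature-scoring rule used by the adversarial RFE above: the display of Eq. 8 did not survive extraction. A form consistent with the surrounding description — the average absolute linear-classifier weight over the |Y_p| primary labels minus the same quantity over the |Y_a| auxiliary labels — is sketched below. This is our hedged reconstruction, not the paper's exact expression.)

```python
import numpy as np

def adversarial_feature_scores(W_primary, W_aux):
    """W_primary: (|Y_p|, d) and W_aux: (|Y_a|, d) stacked linear-classifier weights.
    Higher score = more useful for the primary labels relative to the auxiliary ones;
    RFE then eliminates the lowest-scoring features."""
    return np.abs(W_primary).mean(axis=0) - np.abs(W_aux).mean(axis=0)
```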
The Caltech-UCSD Birds 200-2011 (CUB) consists of 11,788 images of birds, captured in natural settings, corresponding to 200 birds. The dataset also consists of 312 attributes of the birds annotated per image. They are aggregated into 28 groups corresponding to anatomical parts, and split across classes: 100/50/50 for train, validation and test over attributes, as done in BID12. The split ensures different correlations between attribute labels in the three splits, which highlights the problems of negative-transfer. We directly utilize the the feature representation (of length 2048) obtained from ResNet-101 BID6 model that is pre-trained on ImageNet BID2 for AwA and CUB. We explain below the specific architectures and parameters used for each experiment. Mean average precision (mAP) on validation and test sets are reported. Attribute prediction is an unbalanced prediction task, as individual attributes are rare. We use a balance corrected binary crossentropy loss function for all experiments, with the balance count obtained from the training set. Further, we utilize early stopping criteria based on the performance of the model on the validation set. LR, LR+FS and LR+FS-adv: As discussed in Section 3.3.1, one way to prevent the adverse effects of negative transfer is by selecting the optimal feature set for each of the task, based on the proposed adversarial objective function. We use Recursive Feature Elimination (RFE) BID5 method for the task of feature selection wrapped over a Logistic Regression (LR). We perform feature selection experimentation using Logistic Regression (instead of multi-layer perception, as The CUB dataset consists of 28 groups, increasing number of adversarial tasks from most correlated to least helps improve performance on test splits. done in next set of experiments) for time efficiency. For CUB, we transform the 2048 features into a 500 dimensional space by using a dense layer followed by ReLU, which is then used for feature selection. Based on the primary task at hand, the feature importance scores are calculated using Eq. 8, by using the learned weights of LR. At each iteration, we remove λ × 0.33 numbers of features, where we decrease λ at each iteration as: 0.95 × λ. The final number of features to be retained is decided using the validation based on the mAP. We also observe the performance of feature selection without the adversarial setting (LR+fs). In this case, we only select the features that perform well for the primary task, without considering any effect on the auxiliary tasks. Performances of RFE have been observed in both adversarial and non-adversarial settings, which is reported in TAB0, and group-wise test mAP in Figures 4(a) and 5 (a) respectively. As shown in TAB0, the mAP of RFE improves the baseline accuracy when LR has been used (compare rows 2 and 3 with row 1). The efficiency of the proposed criterion for scoring features, as shown by equation 8, can be observed when we compare between rows 2 and 3 in the Table. There is an improvement of 2.3% and 45% compared to LR (row 1), and 5.7% and 31.8% compared to LR+FS (w/o adv) (row 2) for AwA and CUB datasets respectively showcasing the advantage of the proposed method (LR+FS-adv). The average number of features selected are 321 (out of 2048) and 140 (out of 500) for AwA and CUB respectively. It is evident from the Table that the performance of MLP and ALadvC outperforms LR+FS with adv by a large margin. 
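For reference, the recursive elimination schedule used in the LR+FS baselines above (remove a λ × 0.33 portion of the features per iteration, decaying λ by 0.95, with the retained-feature count chosen on validation mAP) can be sketched as follows. Reading λ × 0.33 as a fraction of the remaining features, and the stand-ins fit_and_score and n_keep, are our assumptions.

```python
import numpy as np

def recursive_feature_elimination(X, Y_p, Y_a, fit_and_score, n_keep):
    """fit_and_score trains the classifier on the active features and returns one
    importance score per active feature (e.g. adversarial_feature_scores above)."""
    active = np.arange(X.shape[1])
    lam = 1.0
    while active.size > n_keep:
        scores = fit_and_score(X[:, active], Y_p, Y_a)
        n_drop = max(1, int(lam * 0.33 * active.size))
        n_drop = min(n_drop, active.size - n_keep)     # do not overshoot the target size
        active = np.delete(active, np.argsort(scores)[:n_drop])
        lam *= 0.95                                    # shrink the elimination rate
    return active
```

Such hard selection discards feature dimensions outright, which is the relevant contrast for the comparison that follows.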
The performance can be attributed to MLP and ALadvS can apply projections to the feature vectors rather than selection/omission. We test the performance of the proposed adversarial approach as described in Section 3.3. We utilize the representation vector obtained from ResNet-101 as the base model. Next, we attach a trainable layers (AwA:500, CUB:500 100) with ReLU. Next, we add latent representation per group (AwA:10, CUB:5) with a linear connect to the group specific attribute prediction (with sigmoid activation). The smaller size of the latent layer ensures high feature transfer leading to more negative transfer. The baseline model (MLP) and the proposed model with auxiliary labels for each group as an adversarial classifier (ALadvC) are both trained with learning rate of lr = 0.01, which is decayed exponentially. Additionally, the gradient reversal weight λ = K 1+exp −10 * li, is related to the i th step of training, where l i = i/num steps. The scheduler increases the weight exponentially towards K (described in BID4). The model configuration (number and size of intermediate layers, K, lr) have all been picked from a large parameter sweep for best validation error. Due to the large number of adversarial branches (28 groups corresponding to 27 adversarial branches per group), we utilize the pairwise-task-label overlap to threshold the number of tasks (details below). TAB0 ALadvC performance improves by 2.8% and a 6.8% mAP improvements on the validation and test splits for AwA, respectively. Similarly, 4.8% and 3.5% improvements are observed in CUB dataset. Note that the splits ensure different correlations between attributes in the validation and test splits. Further, 4 (b) and 5 (b), show the effect of the gradient reversal on the learning process as the λ progressively improves. We notice a drop in the train mAP and simultaneous improvement over test mAP as the model sheds improvement obtained from negative transfer. Ultimately the performance on both datasets improves, suggesting the overall adverse effect of negative transfer. The group-wise performance measured on the test sets show drop in mAP for certain groups. We assert that the drop represents the true performance of the model on these tasks. As mentioned above, we identify an additional imbalance in the attribute prediction task, we term class-combination imbalance. The measure indicates the pairwise co-concurrence of attributes, which is computed as the pairwise Jaccard similarity between each attribute (illustration of the maximum Jaccard similarity cross group for both datasets in Figure 6, for their corresponding train splits). We utilize the measure to identify the subset of adversarial tasks per group for the CUB dataset. As shown in FIG4 (c), the performance of ALadvC improves when all group tasks are utilized. The empirical eludes to the advantage of the number or hardness of the adversarial task in improving prediction performance. In this work, we show that adversarial learning is the natural answer to prevent negative transfer. This leads to potential improvement in any supervised learning of natural data that is seeking generalization. We find that even in relatively straight-forward linear models presented above, co-occurrence of unrelated labels hampers performance and must be explicitly treated. We address the problem of negative transfer in a multi-task scenario, and also show the applicability of our solution in any supervised task. 
Supervised learning practitioners can utilize domain expertise to acquire and leverage additional negative labels for this purpose. Recent work in explainability of machine learning models can also be appropriately leveraged to facilitate this task. | We look at negative transfer from a domain adaptation point of view to derive an adversarial learning algorithm. | 1,240 | scitldr |
Recent theoretical and experimental suggest the possibility of using current and near-future quantum hardware in challenging sampling tasks. In this paper, we introduce free-energy-based reinforcement learning (FERL) as an application of quantum hardware. We propose a method for processing a quantum annealer’s measured qubit spin configurations in approximating the free energy of a quantum Boltzmann machine (QBM). We then apply this method to perform reinforcement learning on the grid-world problem using the D-Wave 2000Q quantum annealer. The experimental show that our technique is a promising method for harnessing the power of quantum sampling in reinforcement learning tasks. Reinforcement learning (RL) BID33; BID6 has been successfully applied in fields such as engineering BID11; BID35, sociology BID12; BID30, and economics BID22; BID31. The training samples in reinforcement learning are provided by the interaction of an agent with an ambient environment. For example, in a motion planning problem in uncharted territory, it is desirable for the agent to learn to correctly navigate in the fastest way possible, making the fewest blind decisions. That is, neither exploration nor exploitation can be pursued exclusively without either facing a penalty or failing at the task. Our goal is, therefore, not only to design an algorithm that eventually converges to an optimal policy, but for the algorithm to be able to generate suboptimal policies early in the learning process. Free-energy-based reinforcement learning (FERL) using a restricted Boltzmann machine (RBM), as suggested by BID27, relies on approximating a utility function for the agent, called the Q-function, using the free energy of an RBM. RBMs have the advantage that their free energy can be efficiently calculated using closed formulae. RBMs can represent any joint distribution over binary variables BID20; BID15; Le BID19; however, this property of universality may require exponentially large RBMs BID20; Le BID19.General Boltzmann machines (GBM) are proposed in an effort to devise universal Q-function approximators with polynomially large Boltzmann networks BID10. Traditionally, Monte Carlo simulation is used to perform the computationally expensive tasks of approximating the free energy of GBMs under a Boltzmann distribution. One way to speed up the approximation process is to represent a GBM by an equivalent physical system and try to find its Boltzmann distribution. An example of such a physical system is a quantum annealer consisting of a network of pair-wise interacting quantum bits (qubits). Although quantum annealers have already been used in many areas of computational science, including combinatorial optimization and machine learning, their application in RL has not been explored. In order to use quantum annealing for RL, we first represent the Q-function as the free energy of a physical system, that is, that of a quantum annealer. We then slowly evolve the state of the physical system from a well-known initial state toward a state with a Boltzmann-like probability distribution. Repeating the annealing process sufficiently long can provide us with samples from the Boltzmann distribution so that we can empirically approximate the free energy of the physical system under this distribution. Finally, approximating the free energy of the system would give us an estimate of the Q-function. Up until the past few years, studies were limited to the classical Boltzmann machines. 
1 Recently, BID10 generalized the classical method toward a quantum or quantum-inspired algorithm for approximating the free energy of GBMs. Using simulated quantum annealing (SQA) BID10 showed that FERL using a deep Boltzmann machine (DBM) can provide a drastic improvement in the early stages of learning, yet performing the same procedure on an actual quantum device remained a difficult task. This is because sampling from a quantum system representing a quantum Boltzmann machine is harder than the classical case, since at the end of each anneal the quantum system is in a superposition. Any attempt to measure the final state of the quantum system is doomed to fail since the superposition would collapse into a classical state that does not carry the entirety of information about the final state. In this work, we have two main contributions. We first employ a quantum annealer as a physical device to approximate the free energy of a classical Boltzmann machine. Second, we generalize the notion of classical Boltzmann machines to quantum Boltzmann machines within the field of RL and utilize a quantum annealer to approximate the free energy of a quantum system. In order to deal with the issue of superposition mentioned above, we propose a novel stacking procedure in that we attempt to reconstruct the full state of superposition from the partial information that we get from sampling after each anneal. Finally we report proof-of-concept using the D-Wave 2000Q quantum processor to provide experimental evidence for the applicability of a quantum annealer in reinforcement learning as predicted by BID10. We refer the reader to BID33 and BID37 for an exposition on Markov decision processes (MDP), controlled Markov chains, and the various broad aspects of reinforcement learning. A Q-function is defined by mapping a tuple pπ, s, aq of a given stationary policy π, a current state s, and an immediate action a of a controlled Markov chain to the expected value of the instantaneous and future discounted rewards of the Markov chain that begins with taking action a at initial state s and continuing according to π: Qpπ, s, aq " Err ps, aqs`E DISPLAYFORM0 Here, rps, aq is a random variable, perceived by the agent from the environment, representing the immediate reward of taking action a from state s, and Π is the Markov chain ing from restricting the controlled Markov chain to the policy π. The fixed real number γ P p0, 1q is the discount factor of the MDP. From Q˚ps, aq " max π Qpπ, s, aq, the optimal policy for the MDP can be retrieved via π˚psq " argmax a Q˚ps, aq. This reduces the MDP task to that of computing Q˚ps, aq. Through the Bellman optimality equation BID4, we get Q˚ps, aq " Err ps, aqs`γ ÿ DISPLAYFORM1 so Q˚is the fixed point of the following operator defined on L 8 pSˆAq:T pQq: ps, aq Þ Ñ Err ps, aqs`γ ż max a 1 In this paper, we focus on the TD Q-learning method, with the Q-function parametrized by neural networks in order to find π˚psq and Q˚ps, aq, which is based on minimizing the distance between T pQq and Q. A clamped Boltzmann machine is a GBM in which all visible nodes v are prescribed fixed assignments and removed from the underlying graph. Therefore, the energy of the clamped Boltzmann machine may be written as DISPLAYFORM0 where V and H are the sets of visible and hidden nodes, respectively, and by a slight abuse of notation, the letter v stands both for a graph node v P V and for the assignment v P t0, 1u. 
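The displays in the two preceding subsections were garbled during extraction. For readability, the standard forms that the surrounding text appears to describe are restated below; this is a hedged reconstruction (up to sign and normalization conventions), not a verbatim restoration of the original equations.

```latex
% Q-function of a stationary policy \pi (reconstructed):
Q(\pi, s, a) \;=\; \mathbb{E}\,[\,r(s,a)\,] \;+\;
  \mathbb{E}\!\left[\,\sum_{i \ge 1} \gamma^{i}\, r(s_i, a_i) \,\middle|\, s_0 = s,\; a_0 = a,\; \Pi \right]

% Bellman optimality equation and the associated operator on L^\infty(S \times A):
Q^{*}(s,a) \;=\; \mathbb{E}\,[\,r(s,a)\,] \;+\; \gamma \sum_{s'} \Pr(s' \mid s, a)\, \max_{a'} Q^{*}(s', a')
\qquad
T(Q) : (s,a) \;\mapsto\; \mathbb{E}\,[\,r(s,a)\,] \;+\; \gamma \int \max_{a'} Q(s', a') \, \mathrm{d}\Pr(s' \mid s, a)

% Energy of a clamped classical Boltzmann machine with clamped visible assignment v:
E_{v}(h) \;=\; -\sum_{v \in V} \sum_{h \in H} w_{vh}\, v\, h \;-\; \sum_{\{h, h'\} \subseteq H} w_{hh'}\, h\, h'
```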
The interactions between the variables represented by their respective nodes are specified by real-valued weighted edges of the underlying undirected graph represented by w vh, and w hh 1 denotes the weights between visible and hidden, or hidden and hidden, nodes of the Boltzmann machine, respectively. A clamped quantum Boltzmann machine (QBM) has the same underlying graph as a clamped GBM, but instead of a binary random variable, qubits are associated to each node of the network. The energy function is substituted by the quantum Hamiltonian of an induced transverse field Ising model (TFIM), which is mathematically a Hermitian matrix DISPLAYFORM1 where σ z h represent the Pauli z-matrices and σ x h represent the Pauli x-matrices. Thus, a clamped QBM with Γ " 0 is equivalent to a clamped classical Boltzmann machine. This is because, in this case, H v is a diagonal matrix in the σ z -basis, the spectrum of which is identical to the range of the classical Hamiltonian. We note that is a particular instance of a TFIM. Let us begin with the classical Boltzmann machine case. Following BID27, for an assignment of visible variables v, F pvq denotes the equilibrium free energy, and is given via where β " 1 k B T is a fixed thermodynamic beta. In BID27, it was proposed to use the negative free energy of a GBM to approximate the Q-function through the relationship Qps, aq «´F ps, aq "´F ps, a; wq for each admissible state-action pair ps, aq P SˆA. Here, s and a are binary vectors encoding the state s and action a on the state nodes and action nodes, respectively, of a GBM. In RL, the visible nodes of a GBM are partitioned into two subsets of state nodes S and action nodes A. Here, w represents the vector of weights of a GBM as in. Each entry w of w can now be trained using the TD update rule: DISPLAYFORM0 ∆w vh " εpr n ps n, a n q`γQps n`1, a n`1 q´Qps n, a n qqvxhy and ∆w hh 1 " εpr n ps n, a n q`γQps n`1, a n`1 q´Qps n, a n qqxhh 1 y,where xhy and xhh 1 y are the expected values of the variables and the products of the variables, respectively, in the binary encoding of the hidden nodes with respect to the Boltzmann distribution of the classical Hamiltonian. 1 k B T be a fixed thermodynamic beta as in the classical case. As before, for an assignment of visible variables v, F pvq denotes the equilibrium free energy, and is given via DISPLAYFORM1 Here, Z v " trpe´β Hv q is the partition function of the clamped QBM and ρ v is the density matrix DISPLAYFORM2 Hv. The term´trpρ v ln ρ v q is the entropy of the system. Note that FORMULA7 is a generalization of. The notation x¨¨¨y is used for the expected value of any observable with respect to the Gibbs measure (i.e., the Boltzmann distribution), in particular, DISPLAYFORM3 This is also a generalization of the weighted sum ř h Pph|vqE v phq in. Inspired by the ideas of BID27 and BID1, we use the negative free energy of a QBM to approximate the Q-function exactly as in the classical case:Qps, aq «´F ps, a; wq for each admissible state-action pair ps, aq P SˆA. As before, s and a are binary vectors encoding the state s and action a on the state nodes and action nodes, respectively, of a Boltzmann machine. In RL, the visible nodes of a Boltzmann machine are partitioned into two subsets of state nodes S and action nodes A. Here, w represents the vector of weights of a QBM as in. 
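Similarly, the quantum Hamiltonian and the classical free energy referred to above appear only as placeholders after extraction; their standard forms, reconstructed in a hedged way consistent with the surrounding text, are:

```latex
% Transverse-field Ising Hamiltonian of the clamped QBM (the clamped GBM is the \Gamma = 0 case):
H_{v} \;=\; -\sum_{v \in V} \sum_{h \in H} w_{vh}\, v\, \sigma^{z}_{h}
        \;-\; \sum_{\{h, h'\} \subseteq H} w_{hh'}\, \sigma^{z}_{h} \sigma^{z}_{h'}
        \;-\; \Gamma \sum_{h \in H} \sigma^{x}_{h}

% Classical equilibrium free energy of the clamped machine, and the TD updates quoted above:
F(v) \;=\; -\tfrac{1}{\beta} \log Z_{v}
      \;=\; \sum_{h} P(h \mid v)\, E_{v}(h) \;+\; \tfrac{1}{\beta} \sum_{h} P(h \mid v) \log P(h \mid v)

\Delta w_{vh} \;=\; \varepsilon \big( r_n(s_n, a_n) + \gamma\, Q(s_{n+1}, a_{n+1}) - Q(s_n, a_n) \big)\, v \,\langle h \rangle,
\qquad
\Delta w_{hh'} \;=\; \varepsilon \big( r_n(s_n, a_n) + \gamma\, Q(s_{n+1}, a_{n+1}) - Q(s_n, a_n) \big)\, \langle h h' \rangle
```

(In the quantum case that follows, β = 1/(k_B T) again denotes a fixed thermodynamic beta, F(v) is defined through the density matrix ρ_v, and ⟨·⟩ is taken with respect to the Gibbs state of H_v.)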
Each entry w of w can now be trained using the TD update rule:∆w "´εpr n ps n, a n q´γF ps n`1, a n`1 q`F ps n, a n qq BF Bw.As shown in BID10, from we obtain ∆w vh " εpr n ps n, a n q γF ps n`1, a n`1 q`F ps n, a n qqvxσ z h y and ∆w hh 1 " εpr n ps n, a n q γF ps n`1, a n`1 q`F ps n, a n qqxσ z h σ z h 1 y. This concludes the development of the FERL method using QBMs. We refer the reader to Algorithm 3 in BID10 for more details. What remains to be done is to approximate values of the free energy F ps, aq and also the expected values of the observables xσ z h y and xσ z h σ z h 1 y. In this paper, we demonstrate how quantum annealing can be used to address this challenge. The evolution of a quantum system under a slowly changing, time-dependent Hamiltonian is characterized by BID7. The quantum adiabatic theorem (QAT) in BID7 states that a system remains in its instantaneous steady state, provided there is a gap between the eigen-energy of the steady state and the rest of the Hamiltonian's spectrum at every point in time. QAT motivated BID13 to introduce a paradigm of quantum computing known as quantum adiabatic computation which is closely related to the quantum analogue of simulated annealing, namely quantum annealing (QA), introduced by BID18.The history of QA and QAT inspired manufacturing efforts towards physical realizations of adiabatic evolution via quantum hardware BID17. In reality, the manufactured chips are operated at a non-zero temperature and are not isolated from their environment. Therefore, the existing adiabatic theory does not cover the behaviour of these machines. A contemporary investigation in quantum adiabatic theory was therefore initiated to study adiabaticity in open quantum systems BID28; BID36; Albash et al. FORMULA1; BID2; BID3. These sources prove adiabatic theorems for open quantum systems under various assumptions, in particular when the quantum system is coupled to a thermal bath satisfying the Kubo-Martin-Schwinger condition, implying that the instantaneous steady state is the instantaneous Gibbs state. This work in progress shows promising opportunities to use quantum annealers to sample from the Gibbs state of a TFIM. In practice, due to additional complications (e.g., level crossings and gap closure, described in the references above), the samples gathered from the quantum annealer are far from the Gibbs state of the final Hamiltonian. In fact, BID0 suggests that the distribution of the samples would instead correspond to an instantaneous Hamiltonian at an intermediate point in time, called the freeze-out point. Unfortunately, this point and, consequently, the strength Γ of the transverse field at this point, is not known a priori, and also depends on the TFIM undergoing evolution. Our goal is simply to associate a single (average) virual Γ to all TFIMs constructed through FERL. Another unknown parameter is the inverse temperature β, at which the Gibbs state, the partition function, and the free energy are attained. In a similar fashion, we wish to associate a single virtual β to all TFIMs encountered. The quantum annealer used in our experiments is the D-Wave 2000Q, which consists of a chip of superconducting qubits connected to each other according to a sparse adjacency graph called the Chimera graph. The Chimera graph structure looks significantly different from the frequently used models in machine learning, for example, RBMs and DBMs, which consist of consecutive fully connected bipartite graphs. 
FIG0 shows two adjacent blocks of the Chimera graph which consist of 16 qubits, which, in this paper, serve as the clamped QBM used in FERL.Another complication when using a quantum annealer as a QBM is that the spin configurations of the qubits can only be measured along a fixed axis (here the z-basis of the Bloch sphere). Once σ z is measured, all of the quantum information related to the projection of the spin along the transverse field (i.e., the spin σ x) collapses and cannot be retrieved. Therefore, even with a choice of virtual Γ, virtual β, and all of the measured configurations, the energy of the TFIM is still unknown. We propose a method for overcoming this challenge based on the Suzuki-Trotter expansion of the TFIM, which we call replica stacking, the details of which are explained in §3.4. In §4, we perform a grid search over values of the virtual parameters β and Γ. The accepted virtual parameters are the ones that in the most-effective learning for FERL in the early stages of training. By the Suzuki-Trotter decomposition BID34, the partition function of the TFIM defined by the Hamiltonian can be approximated using the partition function of a classical Hamiltonian denoted by H eff v and called an effective Hamiltonian, which corresponds to a classical Ising model of one dimension higher. More precisely, where r is the number of replicas, w`" 1 2β log coth´Γ β r¯, and h k represent spins of the classical system of one dimension higher. Note that each hidden node's Pauli z-matrices σ z h are represented by r classical spins, denoted by h k, with a slight abuse of notation. In other words, the original Ising model with a non-zero transverse field represented through non-commuting operators can be mapped to a classical Ising model of one dimension higher. FIG1 shows the underlying graph of a TFIM on a two-dimensional lattice and a corresponding 10-replica effective Hamiltonian in three dimensions. DISPLAYFORM0 The intuition behind the Suzuki-Trotter decomposition is that the superposition of the spins in a quantum system is represented classically by replicas in the z-basis. In other words, the measurement of the quantum system in the z-basis is interpreted as choosing one replica at random. Note that the probabilities of measuring`1 or´1 for each individual spin are preserved. This way, each hidden node in the quantum Boltzmann machine carries more information than a classical one; in fact, a classical representation of this system requires r classical binary units via the Suzuki-Trotter decomposition. Consequently, the connections between the hidden nodes become more complicated in the quantum case as well and can carry more information on the correlations between the hidden nodes. Note that the coupling strengths between the replicas are not arbitrary, but come from the mathematical decomposition following the Suzuki-Trotter formula. As a , the quantum Boltzmann machine can be viewed as an undirected graphical model but in one dimension higher than the classical Boltzmann machine. In the case of classical GBMs without further restrictions on the graph structure, xhy, xhh 1 y, and Qps, aq «´F ps, a; wq are not tractable. Consequently, to perform the weight update in one requires samples from the Boltzmann distribution corresponding to energy function to estimate xhy, xhh 1 y, and F ps, a; wq empirically. 
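Before turning to how samples from its Boltzmann distribution are obtained, a sketch of the effective (replicated) energy implied by the Suzuki-Trotter decomposition above is given below. The 1/r weighting of the in-replica terms is our assumption about a normalization that did not survive extraction; the replica coupling w+ = (1/2β) log coth(Γβ/r) and the periodic closure term are as stated above.

```python
import numpy as np

def replica_coupling(beta, gamma, r):
    """w+ = (1 / (2*beta)) * log(coth(gamma*beta / r))."""
    x = gamma * beta / r
    return np.log(np.cosh(x) / np.sinh(x)) / (2.0 * beta)

def effective_energy(spins, v, w_vh, w_hh, beta, gamma):
    """spins: (r, n_hidden) array of replica spin values in {-1, +1};
    v: clamped visible assignment; w_hh is symmetric with zero diagonal."""
    r = spins.shape[0]
    w_plus = replica_coupling(beta, gamma, r)
    # clamped classical energy, averaged over the r replicas (assumed 1/r weighting)
    e_classical = np.mean([-(v @ w_vh @ h) - 0.5 * (h @ w_hh @ h) for h in spins])
    # ferromagnetic coupling between neighbouring replicas, with h^r coupled back to h^1
    e_replica = -w_plus * np.sum(spins * np.roll(spins, -1, axis=0))
    return e_classical + e_replica
```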
To approximate the right-hand side of and FORMULA1, we sample from the Boltzmann distribution of the energy function represented by the effective Hamiltonian using (, Theorem 6). We find the expected values of the observables xσ One way to sample spin values from the Boltzmann distribution of the effective Hamiltonian is to use the simulated quantum annealing algorithm (SQA) (see for an introduction). SQA is one of the many flavours of quantum Monte Carlo methods, and is based on the Suzuki-Trotter expansion described above. This algorithm simulates the quantum annealing phenomena of a TFIM by slowly reducing the strength of the transverse field at finite temperature to the desired target value. In our implementation, we have used a single spin-flip variant of SQA with a linear transverse-field schedule as in BID21 and BID14. Experimental studies have shown similarities in the behaviour of SQA and that of quantum annealing Isakov et al. The classical counterpart of SQA is conventional simulated annealing (SA), which is based on thermal annealing. This algorithm can be used to sample from Boltzmann distributions that correspond to an Ising spin model in the absence of a transverse field. Unlike SA, it is possible to use SQA not only to approximate the Boltzmann distribution of a classical Boltzmann machine, but also that of a quantum Hamiltonian in the presence of a transverse field. This can be done by reducing the strength of the transverse field to the desired value defined by the model, rather than to zero. It has been proven by BID24 that the spin system defined by SQA converges to the Boltzmann distribution of the effective classical Hamiltonian of one dimension higher that corresponds to the quantum Hamiltonian. Therefore, it is straightforward to use SQA to approximate the free energy in as well as the observables xσ z h y and xσ z h σ z h 1 y. However, any Boltzmann distribution sampling method based on Markov chain Monte Carlo (MCMC) has the major drawback of being extremely slow and computationally involved. Actually, it is an NP-hard problem to sample from the Boltzmann distribution. Another option is to use variational approximation BID26, which suffers from lack of accuracy and works in practice only in limited cases. As explained above, quantum annealers have the potential to provide samples from Boltzmann distributions (in the z-basis) corresponding to TFIM in a more efficient way. In what follows, we explain how to use quantum annealing to approximate the free energy corresponding to an effective Hamiltonian which in turn can be used to approximate the free energy of a QBM. (, Theorem 6) and translation invariance, each replica of the effective classical model is an approximation of the spin measurements of the TFIM in the measurement bases σ z. Therefore, a σ z -configuration sampled by a quantum annealer that operates at a given virtual inverse temperature β, and anneals up to a virtual transverse-field strength Γ, may be viewed as an instance of a classical spin configuration from a replica of the classical effective Hamiltonian of one dimension higher. This suggests the following method to approximate the free energy from for a TFIM. We gather a pool C of configurations sampled by the quantum annealer for the TFIM considered, allowing repetitions. Let r be the number of replicas. We write c eff " pc 1,..., c r q to indicate an effective configuration c eff with the classical configurations c 1 to c r as its replicas. 
We write c eff to denote the underlying set tc 1,..., c r u of replicas of c eff (without considering their ordering). We have P rc eff " pc 1,..., c r qs " P " c eff " pc 1,..., c r q|c eff " tc 1,..., c r u ‰ˆP " c eff " tc 1,..., c r u ‰ " P " c eff " pc 1,..., c r q|c eff " tc 1,..., c r u ‰ˆP " c eff " tc 1,..., c r u|c eff Ď C ‰ˆP " c eff Ď C ‰.The argument in the previous paragraph can now be employed to allow the assumption P " c eff Ď C ‰ » 1. In other words, the probability mass function of the effective configurations is supported in the subset of those configurations synthesized from the elements of C as candidate replicas. The conditional probability Prc eff " tc 1,..., c r u|c eff Ď C s can be sampled from by drawing r elements c 1,..., c r from C. We then sample from P " c eff " pc 1,..., c r q|c eff " tc 1,..., c r u ‰, according to the following distribution over c eff: DISPLAYFORM0 We consider πpc eff q our target distribution and construct the following MCMC method for which the limiting distribution is πpc eff q. We first attach the r classical spin configurations to the SQA's effective configuration structure uniformly at random. We then transition to a different arrangement with a Metropolis acceptance probability. For example, we may choose two classical configurations at random and exchange them with probability DISPLAYFORM1 where Epc eff q " w`ř h´ř r´1 k"1 h c k h c k`1`h c1 h cr¯. Such a stochastic process is known to satisfy the detailed balance condition. Consequently, the MCMC method allows us to sample from the effective spin configurations. This procedure of sampling and then performing the MCMC method creates a pool of effective spin configurations, which are then employed in equation FORMULA1 in order to approximate the free energy of the TFIM empirically. However, we consider a relatively small number of hidden nodes in our experiments, so the number of different σ z -configurations sampled by the quantum annealer is limited. As a consequence, there is no practical need to perform the MCMC method defined above. Instead, we attach classical spin configurations from the pool to the SQA effective configuration structure at random. In other words, in r iterations, a spin configuration is sampled from the pool of classical spin configurations described above and inserted as the next replica of the effective classical Hamiltonian consisting of r replicas. It is worthwhile to reiterate that this replica stacking technique yields an undirected graphical model. Specifically, the structure described in FIG1 is an undirected graphical model in the space of hidden nodes, where the node statistics are obtained from the Boltzmann distribution. One difference between this model and a classical Boltzmann machine is that each hidden node activation is governed by a series of r replicas in one dimension higher, and the undirected, replica-to-replica connections calculated therein. Moreover, the energy function of this extended model differs from the energy function of the classical Boltzmann machine (compare and FORMULA3 ). The free energy of the extended graphical model serves as the function approximator to the Q-function. 
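A hedged sketch of the replica-stacking procedure just described: draw r annealer reads from the pool C, stack them as the replicas of an effective configuration, and optionally reorder them with Metropolis swaps whose acceptance uses the effective energy. As noted above, the experiments skip the swap step and simply attach reads at random; energy_fn, the acceptance rule exp(−β ΔE), and all names are our own stand-ins.

```python
import numpy as np

def stack_replicas(pool, r, rng, beta=None, energy_fn=None, n_swaps=0):
    """pool: list of hidden-spin reads (arrays in {-1, +1}) from the annealer for one (s, a).
    Returns one effective configuration of shape (r, n_hidden)."""
    idx = rng.integers(0, len(pool), size=r)
    c_eff = np.stack([pool[i] for i in idx])          # attach r reads at random
    for _ in range(n_swaps):                           # optional Metropolis reordering
        i, j = rng.integers(0, r, size=2)
        proposal = c_eff.copy()
        proposal[[i, j]] = proposal[[j, i]]
        dE = energy_fn(proposal) - energy_fn(c_eff)
        if rng.random() < min(1.0, np.exp(-beta * dE)):
            c_eff = proposal
    return c_eff
```

Averaging the effective energy, ⟨σ^z_h⟩, ⟨σ^z_h σ^z_{h'}⟩, and the empirical configuration probabilities over many stacked configurations gives the estimates needed for the free energy and the TD update sketched earlier; in the experiments described next, 150 such effective configurations are constructed from a pool of 3750 reads.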
We benchmark our various FERL methods on a 3ˆ5 grid-world problem BID32 with an agent capable of taking the actions up, down, left, or right, or standing still, on a grid-world with y, andPpc ef f |s i, a i q using Algorithm 2, for (i " 1, 2) calculate F ps i, a i q using for (i " 1, 2) Qps i, a i q дF ps i, a i q for pi " 1, 2q update QBM weights using and πps 1 q Ð argmax a Qps 1, aq end for return π Algorithm 2 Replica stacking initialize the structure of the effective Hamiltonian in one dimension higher for i " 1, 2,..., m do for j " 1, 2,..., r do obtain spin configuration sample in z-basis from QA attach this spin configuration to j-th replica of the i-th effective configuration structure end for perform the MCMC technique described in §3.4 with transition probabilities to obtain the i-th instance of effective spin configurations end for obtain xH eff s i,a i y from the average energy of the m effective spin configurations obtain xhy and xhh 1 y by averaging over all h and h 1 replicas in each spin configuration gather statistics from Ppc eff |s i, a i q using the m effective spin configurations return xhy, xhh 1 y, xH DISPLAYFORM0 y, and Ppc eff |s i, a i q one deterministic reward, one wall, and one penalty, as shown in FIG5. The task is to find an optimal policy, as shown in FIG5, for the agent at each state in the grid-world. All of the Boltzmann machines used in our algorithms consist of 16 hidden nodes. The discount factor, as explained in §2, is set to 0.8. The agent attains the reward R " 200 in the top-left corner, the neutral value of moving to any empty cell is 100, and the agent is penalized by not receiving any reward if it moves to the penalty cell with value P " 0.For T r independent runs of every FERL method, T s training samples are used. The fidelity measure at the i-th training sample is defined by fidelitypiq " pT rˆ| S|q´1 DISPLAYFORM1 where π˚denotes the best known policy and Aps, i, lq denotes the action assigned at the l-th run and i-th training sample to the state s. In our experiments, each algorithm is run 100 times. An optimal policy for this problem instance can be represented as a selection of directional arrows indicating movement directions. Fig. 4 demonstrates the performance of a fully connected deep Q-network BID23 consisting of an input layer of 14 state nodes, two layers of eight hidden nodes each, and an output layer of five nodes representing the values of the Q-function for different actions, given a configuration of state nodes. We use the same number of hidden nodes in the fully connected deep Q-network as in the other networks described in this paper. We treat the network of superconducting qubits represented in FIG0 as a clamped QBM with two hidden layers, represented using blue and red colours. The state nodes are considered fully connected to the blue qubits and the action nodes are fully connected to the red qubits. For a choice of virtual parameters Γ ‰ 0 and β, which appear in FORMULA1 and FORMULA1, and for each query to the D-Wave 2000Q chip, we construct 150 effective classical configurations of one dimension higher, out of a pool of 3750 reads, according to the replica stacking method introduced in §3.4. The 150 configurations are, in turn, employed to approximate the free energy of the quantum Hamiltonian. We conduct 10 independent runs of FERL in this fashion, and find the average fidelity over the 10 runs and over the T s " 300 training samples. Fig. 
6 shows the growth of the average fidelity of the best known policies generated by different FERL methods. For each method, the fidelity curve is an average over 100 independent runs, each with T s " 500 training samples. In this figure, the "D-Wave Γ " 0.5, β " 2.0" curve corresponds to the D-Wave 2000Q replica stacking-based method with the choice of the best virtual parameters Γ " 0.5 and β " 2.0, as shown in the heatmap in FIG6. The training is based on formulae,, and. The "SQA Bipartite Γ " 0.5, β " 2.0" and "SQA Chimera Γ " 0.5, β " 2.0" curves are based on the same formulae with the underlying graphs being a bipartite (DBM) and a Chimera graph, respectively, with the same choice of virtual parameters, but the effective Hamiltonian configurations generated using SQA as explained in §3.3.The "SA Bipartite β " 2.0" and "SA Chimera β " 2.0" curves are generated by using SA to train a classical DBM and a classical GBM on the Chimera graph, respectively, using formulae, FORMULA6, and. SA is run with a linear inverse temperature schedule, where β " 2.0 indicates the final value. The "D-Wave Classical β " 2.0" curve is generated using the same method, but with samples obtained using the D-Wave 2000Q. The "RBM" curve is generated using the method in BID27. We solve the grid-world problem using various Q-learning methods with the Q-function parametrized by different neural networks. For comparison, we demonstrate the performance of a fully connected deep Q-network method that can be considered state of the art. This method efficiently processes every training sample, but, as shown in Fig. 4, requires a very large number of training samples to converge to the optimal policy. Another conventional method is free-energy-based RL using an RBM. This method is also very successful at learning the optimal policy at the scale of the RL task considered in our experiment. Although this method does not outperform other FERL methods that take advantage of a highly efficient sampling oracle, the processing of each training sample is efficient, as it is based on closed formulae. In fact, for the size of problem considered, the RBM-based FERL outperforms the fully connected deep Q-network method. The comparison of in Fig. 6 suggests that replica stacking is a successful method for estimating effective classical configurations obtained from a quantum annealer, given that the spins can only be measured in measurement bases. For practical use in RL, this method provides a means of treating the quantum annealer as a QBM. FERL using the quantum annealer, in conjunction with the replica stacking technique, provides significant improvement over FERL using classical Boltzmann machines. The curve representing SQA-based FERL using a Boltzmann machine on the Chimera graph is almost coincident with the one obtained using the D-Wave 2000Q, whereas the SQA-based FERL using a DBM slightly outperforms it. This suggests that quantum annealing chips with greater connectivity and more control over annealing time can further improve the performance of the replica stacking method applied to RL tasks. This is further supported by comparing the performance of SA-based FERL using a DBM versus SA-based FERL using the Chimera graph. This shows that DBM is, due to its additional connections, a better choice of neural network compared to the Chimera graph. For practical reasons, we aim to associate an identical choice of virtual parameters β and Γ to all of the TFIMs constructed using FERL. 
BID5 and BID25 provide methods for estimating the effective inverse temperature β for other applications. However, in both studies, the samples obtained from the quantum annealer are matched to the Boltzmann distribution of a classical Ising model. In fact, the transverse-field strength is a second virtual parameter that we consider. The optimal choice Γ " 0.5 corresponds to 2{3 of the annealing time, in agreement with the work of BID0, who also considers TFIM with 16 qubits. The agreement of FERL using quantum annealer reads treated as classical Boltzmann samples with that of FERL using SA and classical Boltzmann machines suggests that, at least for this task and this size of Boltzmann machine, the measurements provided by the D-Wave 2000Q can be considered good approximations of Boltzmann distribution samples of classical Ising models. The extended undirected graphical model developed in this paper using the replica stacking method is not limited to Q-function approximation in RL tasks. Potentially, this method can be applied to tasks where Boltzmann machines can be used. This method provides a mechanism for approximating the activations and partition functions of quantum Boltzmann machines that have a significant transverse field. In this paper, we describe a free-energy-based reinforcement learning algorithm using an existing quantum annealer, namely the D-Wave 2000Q. Our method relies on the Suzuki-Trotter decomposition and the use of the measured configurations by the D-Wave 2000Q as replicas of an effective classical Ising model of one dimension higher. The presented here are first-step proofs of concept of a proposed quantum algorithm with a promising path towards outperforming reinforcement learning algorithms devised for digital hardware. Given appropriate advances in quantum annealing hardware, future research can employ the proposed principles to solve larger-scale reinforcement learning tasks in the emerging field of quantum machine learning. | We train Quantum Boltzmann Machines using a replica stacking method and a quantum annealer to perform a reinforcement learning task. | 1,241 | scitldr |
Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations on benign inputs. However, under the black-box setting, most existing adversaries often have a poor transferability to attack other defense models. In this work, from the perspective of regarding the adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM). NI-FGSM aims to adapt Nesterov accelerated gradient into the iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples. While SIM is based on our discovery on the scale-invariant property of deep learning models, for which we leverage to optimize the adversarial perturbations over the scale copies of the input images so as to avoid "overfitting” on the white-box model being attacked and generate more transferable adversarial examples. NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack to generate more transferable adversarial examples against the defense models. Empirical on ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks. Deep learning models have been shown to be vulnerable to adversarial examples ), which are generated by applying human-imperceptible perturbations on benign input to in the misclassification. In addition, adversarial examples have an intriguing property of transferability, where adversarial examples crafted by the current model can also fool other unknown models. As adversarial examples can help identify the robustness of models , as well as improve the robustness of models by adversarial training, learning how to generate adversarial examples with high transferability is important and has gained increasing attentions in the literature. Several gradient-based attacks have been proposed to generate adversarial examples, such as onestep attacks and iterative attacks (; . Under the white-box setting, with the knowledge of the current model, existing attacks can achieve high success rates. However, they often exhibit low success rates under the black-box setting, especially for models with defense mechanism, such as adversarial training (; and input modification). Under the black-box setting, most existing attacks fail to generate robust adversarial examples against defense models. In this work, by regarding the adversarial example generation process as an optimization process, we propose two new methods to improve the transferability of adversarial examples: Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM). • Inspired by the fact that Nesterov accelerated gradient is superior to momentum for conventionally optimization , we adapt Nesterov accelerated gradient into the iterative gradient-based attack, so as to effectively look ahead and improve the transferability of adversarial examples. We expect that NI-FGSM could replace the momentum iterative gradient-based method in the gradient accumulating portion and yield higher performance. 
• Besides, we discover that deep learning models have the scale-invariant property, and propose a Scale-Invariant attack Method (SIM) to improve the transferability of adversarial examples by optimizing the adversarial perturbations over the scale copies of the input images. SIM can avoid "overfitting" on the white-box model being attacked and generate more transferable adversarial examples against other black-box models. • We found that combining our NI-FGSM and SIM with existing gradient-based attack methods (e.g., diverse input method ) can further boost the attack success rates of adversarial examples. Extensive experiments on the ImageNet dataset show that our methods attack both normally trained models and adversarially trained models with higher attack success rates than existing baseline attacks. Our best attack method, SI-NI-TI-DIM (Scale-Invariant Nesterov Iterative FGSM integrated with translation-invariant diverse input method), reaches an average success rate of 93.5% against adversarially trained models under the black-box setting. For further demonstration, we evaluate our methods by attacking the latest robust defense methods;;; ). The show that our attack methods can generate adversarial examples with higher transferability than state-of-theart gradient-based attacks. 2.1 NOTATION Let x and y true be a benign image and the corresponding true label, respectively. Let J(x, y true) be the loss function of the classifier (e.g. the cross-entropy loss). Let x adv be the adversarial example of the benign image x. The goal of the non-targeted adversaries is to search an adversarial example x adv to maximize the loss J(x adv, y true) in the p norm bounded perturbations. To align with previous works, we focus on p = ∞ in this work to measure the distortion between x adv and x. That is x adv − x ∞ ≤, where is the magnitude of adversarial perturbations. Several attack methods have been proposed to generate adversarial examples. Here we provide a brief introduction. generates an adversarial example x adv by maximizing the loss function J(x adv, y true) with one-step update as: where sign(·) function restricts the perturbation in the L ∞ norm bound. Iterative Fast Gradient Sign Method (I-FGSM). extend FGSM to an iterative version by applying FGSM with a small step size α: where Clip x (·) function restricts generated adversarial examples to be within the -ball of x. Projected Gradient Descent (PGD). PGD attack is a strong iterative variant of FGSM. It consists of a random start within the allowed norm ball and then follows by running several iterations of I-FGSM to generate adversarial examples. Momentum Iterative Fast Gradient Sign Method (MI-FGSM). integrate momentum into the iterative attack and lead to a higher transferability for adversarial examples. Their update procedure is formalized as follows: where g t is the accumulated gradient at iteration t, and µ is the decay factor of g t. Diverse Input Method (DIM). optimize the adversarial perturbations over the diverse transformation of the input image at each iteration. The transformations include the random resizing and the random padding. DIM can be naturally integrated into other gradient-based attacks to further improve the transferability of adversarial examples. Translation-Invariant Method (TIM). Instead of optimizing the adversarial perturbations on a single image, use a set of translated images to optimize the adversarial perturbations. 
They further develop an efficient algorithm to calculate the gradients by convolving the gradient at untranslated images with a kernel matrix. TIM can also be naturally integrated with other gradientbased attack methods. The combination of TIM and DIM, namely TI-DIM, is the current strongest black-box attack method. Carlini & Wagner attack (C&W). C&W attack is an optimization-based method which directly optimizes the distance between the benign examples and the adversarial examples by solving: arg min It is a powerful method to find adversarial examples while minimizing perturbations for white-box attacks, but it lacks the transferability for black-box attacks. Various defense methods have been proposed to against adversarial examples, which can fall into the following two categories. Adversarial Training. One popular and promising defense method is adversarial training;; ), which augments the training data by the adversarial examples in the training process. develop a successful adversarial training method, which leverages the projected gradient descent (PGD) attack to generate adversarial examples. However, this method is difficult to scale to large-scale datasets . propose ensemble adversarial training by augmenting the training data with perturbations transferred from various models, so as to further improve the robustness against the black-box attacks. Currently, adversarial training is still one of the best techniques to defend against adversarial attacks. Input Modification. The second category of defense methods aims to mitigate the effects of adversarial perturbations by modifying the input data. discover that there exists a range of image transformations, which have the potential to remove adversarial perturbations while preserving the visual information of the images. mitigate the adversarial effects through random transformations. propose high-level representation guided denoiser to purify the adversarial examples. propose a JPEG-based defensive compression framework to rectify adversarial examples without impacting classification accuracy on benign data. leverage an end-to-end image compression model to defend adversarial examples. Although these defense methods perform well in practice, they can not tell whether the model is truly robust to adversarial perturbations. use randomized smoothing to obtain an ImageNet classifier with certified adversarial robustness. Similar with the process of training neural networks, the process of generating adversarial examples can also be viewed as an optimization problem. In the optimizing phase, the white-box model being attacked to optimize the adversarial examples can be viewed as the training data on the training process. And the adversarial examples can be viewed as the training parameters of the model. Then in the testing phase, the black-box models to evaluate the adversarial examples can be viewed as the testing data of the model. From the perspective of the optimization, the transferability of the adversarial examples is similar with the generalization ability of the trained models. Thus, we can migrate the methods used to improve the generalization of models to the generation of adversarial examples, so as to improving the transferability of adversarial examples. Many methods have been proposed to improve the generalization ability of the deep learning models, which can be split to two aspects: better optimization algorithm, such as Adam optimizer; data augmentation . 
Correspondingly, the methods to improve the transferability of adversarial examples can also be split to two aspects: better optimization algorithm, such as MI-FGSM, which applies the idea of momentum; model augmentation (i.e., ensemble attack on multiple models), such as the work of, which considers to attack multiple models simultaneously. Based on above analysis, we aim to improve the transferability of adversarial examples by applying the idea of Nesterov accelerated gradient for optimization and using a set of scaled images to achieve model augmentation. Nesterov Accelerated Gradient (NAG) is a slight variation of normal gradient descent, which can speed up the training process and improve the convergence significantly. NAG can be viewed as an improved momentum method, which can be expressed as: Typical gradient-based iterative attacks (e.g., I-FGSM) greedily perturb the images in the direction of the sign of the gradient at each iteration, which usually falls into poor local maxima, and shows weak transferability than single-step attacks (e.g., FGSM). show that adopting momentum into attacks can stabilize the update directions, which helps to escape from poor local maxima and improve the transferability. Compared to momentum, beyond stabilize the update directions, the anticipatory update of NAG gives previous accumulated gradient a correction that helps to effectively look ahead. Such looking ahead property of NAG can help us escape from poor local maxima easier and faster, ing in the improvement on transferability. We integrate NAG into the iterative gradient-based attack to leverage the looking ahead property of NAG and build a robust adversarial attack, which we refer to as NI-FGSM (Nesterov Iterative Fast Gradient Sign Method). Specifically, we make a jump in the direction of previous accumulated gradients before computing the gradients in each iteration. Start with g 0 = 0, the update procedure of NI-FGSM can be formalized as follows: where g t denotes the accumulated gradients at the iteration t, and µ denotes the decay factor of g t. Besides considering a better optimization algorithm for the adversaries, we can also improve the transferability of adversarial examples by model augmentation. We first introduce a formal definition of loss-preserving transformation and model augmentation as follows. Definition 1 Loss-preserving Transformation. Given an input x with its ground-truth label y true and a classifier f (x): x ∈ X → y ∈ Y with the cross-entropy loss J(x, y), if there exists an input transformation T (·) that satisfies J(T (x), y true ) ≈ J(x, y true) for any x ∈ X, we say T (·) is a loss-preserving transformation. Definition 2 Model Augmentation. Given an input x with its ground-truth label y true and a model f (x): x ∈ X → y ∈ Y with the cross-entropy loss J(x, y), if there exists a loss-preserving transformation T (·), then we derive a new model by f (x) = f (T (x)) from the original model f. we define such derivation of models as model augmentation. Intuitively, similar to the generalization of models that can be improved by feeding more training data, the transferability of adversarial examples can be improved by attacking more models simultaneously. enhance the gradient-based attack by attacking an ensemble of models. However, their approach requires training a set of different models to attack, which has a large computational cost. 
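As a concrete reading of the NI-FGSM update just described, here is a minimal NumPy sketch. The callable grad_loss stands in for ∇_x J(x, y) of the white-box model and is not something the paper defines; the L1 normalization of the gradient and the default hyper-parameters are likewise our assumptions.

import numpy as np

def ni_fgsm(x, y, grad_loss, eps=16.0 / 255.0, T=10, mu=1.0):
    # Nesterov Iterative FGSM: evaluate the gradient at a look-ahead point before accumulating.
    alpha = eps / T
    x_adv, g = x.astype(float), np.zeros_like(x, dtype=float)
    for _ in range(T):
        x_nes = x_adv + alpha * mu * g                       # jump along the accumulated gradient
        grad = grad_loss(x_nes, y)                           # "look ahead" gradient
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)   # stabilized accumulation
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)   # L_inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)                     # keep a valid image
    return x_adv

Setting mu = 0 and taking the gradient at x_adv instead of x_nes recovers plain I-FGSM, which makes explicit that the only changes are the anticipatory jump and the point at which the gradient is evaluated.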
Instead, in this work, we derive an ensemble of models from the original model by model augmentation, which is a simple way of obtaining multiple models via the loss-preserving transformation. To get the loss-preserving transformation, we discover that deep neural networks might have the scale-invariant property, besides the translation invariance. Specifically, the loss values are similar for the original and the scaled images on the same model, which is empirically validated in Section 4.2. Thus, the scale transformation can be served as a model augmentation method. Driven by the above analysis, we propose a Scale-Invariant attack Method (SIM), which optimizes the adversarial perturbations over the scale copies of the input image: arg max where S i (x) = x/2 i denotes the scale copy of the input image x with the scale factor 1/2 i, and m denotes the number of the scale copies. With SIM, instead of training a set of models to attack, we can effectively achieve ensemble attacks on multiple models by model augmentation. More importantly, it can help avoid "overfitting" on the white-box model being attacked and generate more transferable adversarial examples. For the gradient processing of crafting adversarial examples, NI-FGSM introduces a better optimization algorithm to stabilize and correct the update directions at each iteration. For the ensemble attack of crafting adversarial examples, SIM introduces model augmentation to derive multiple models to attack from a single model. Thus, NI-FGSM and SIM can be naturally combined to build a stronger attack, which we refer to as SI-NI-FGSM (Scale-Invariant Nesterov Iterative Fast Gradient Sign Method). The algorithm of SI-NI-FGSM attack is summarized in Algorithm 1. In addition, SI-NI-FGSM can be integrated with DIM (Diverse Input Method), TIM (TranslationInvariant Method) and TI-DIM (Translation-Invariant with Diverse Input Method) as SI-NI-DIM, SI-NI-TIM and SI-NI-TI-DIM, respectively, to further boost the transferability of adversarial examples. The detailed algorithms for these attack methods are provided in Appendix A. In this section, we provide experimental evidence on the advantage of the proposed methods. We first provide experimental setup, followed by the exploration of the scale-invariance property for deep learning models. We then compare the of the proposed methods with baseline methods in Section 4.3 and 4.4 on both normally trained models and adversarially trained models. Beyond the defense models based on adversarial training, we also quantify the effectiveness of the proposed methods on other advanced defense in Section 4.5. Additional discussions, the comparison between NI-FGSM and MI-FGSM and the comparison with classic attacks, are in Section 4.6. Codes are available at https://github.com/JHL-HUST/SI-NI-FGSM. Input: A clean example x with ground-truth label y true; a classifier f with loss function J; Input: Perturbation size; maximum iterations T; number of scale copies m and decay factor µ. Update g t+1 by g t+1 = µ · g t + Dataset. We randomly choose 1000 images belonging to the 1000 categories from ILSVRC 2012 validation set, which are almost correctly classified by all the testing models. For normally trained models, we consider Inception-v3 (Inc-v3) , Inception-v4 (Inc-v4), Inception-Resnet-v2 (IncRes-v2) and Resnet-v2-101 (Res-101) . For adversarially trained models, we consider Inc-v3 ens3, Inc-v3 ens4 and IncRes-v2 ens . 
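The scale-invariant gradient used by SIM, and summed in Algorithm 1 above, can be sketched as a small wrapper around the same placeholder gradient function; averaging over the m scale copies is our reading of the sum in the objective.

import numpy as np

def scale_invariant_grad(x, y, grad_loss, m=5):
    # Average the loss gradient over the scale copies S_i(x) = x / 2^i, i = 0, ..., m - 1.
    g = np.zeros_like(x, dtype=float)
    for i in range(m):
        g += grad_loss(x / (2.0 ** i), y)
    return g / m

Substituting this for the single gradient call inside the NI-FGSM loop yields SI-NI-FGSM; composing it with the random input transforms of DIM or the kernel convolution of TIM gives the SI-NI-DIM, SI-NI-TIM and SI-NI-TI-DIM variants named above.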
Additionally, we include other advanced defense models: high-level representation guided denoiser (HGD), random resizing and padding (R&P) , NIPS-r3 1, feature distillation (FD) , purifying perturbations via image compression model (Comdefend) and randomized smoothing (RS) . Baselines. We integrate our methods with DIM , TIM, and TI-DIM , to show the performance improvement of SI-NI-FGSM over these baselines. Denote our SI-NI-FGSM integrated with other attacks as SI-NI-DIM, SI-NI-TIM, and SI-NI-TIM-DIM, respectively. Hyper-parameters. For the hyper-parameters, we follow the settings in with the maximum perturbation as = 16, number of iteration T = 16, and step size α = 1.6. For MI-FGSM, we adopt the default decay factor µ = 1.0. For DIM, the transformation probability is set to 0.5. For TIM, we adopt the Gaussian kernel and the size of the kernel is set to 7 × 7. For our SI-NI-FGSM, the number of scale copies is set to m = 5. To validate the scale-invariant property of deep neural networks, we randomly choose 1,000 original images from ImageNet dataset and keep the scale size in the range of [0.1, 2.0] with a step size 0.1. Then we feed the scaled images into the testing models, including Inc-v3, Inc-v4, IncRes-2, and Res-101, to get the average loss over 1,000 images. As shown in Figure 1, we can easily observe that the loss curves are smooth and stable when the scale size is in range [0.1, 1.3]. That is, the loss values are very similar for the original and scaled images. So we assume that the scale-invariant property of deep models is held within [0.1, 1.3], and we leverage the scale-invariant property to optimize the adversarial perturbations over the scale copies of the input images. In this subsection, we integrate our SI-NI-FGSM with TIM, DIM and TI-DIM, respectively, and compare the black-box attack success rates of our extensions with the baselines under single model setting. As shown in Table 1, our extension methods consistently outperform the baseline attacks by 10% ∼ 35% under the black-box setting, and achieve nearly 100% success rates under the white-box setting. It indicates that SI-NI-FGSM can serve as a powerful approach to improve the transferability of adversarial examples. Following the work of , we consider to show the performance of our methods by attacking multiple models simultaneously. Specifically, we attack an ensemble of normally trained models (including Inc-v3, Inc-v4, IncRes-v2 and Res-101) with equal ensemble weights using TIM, SI-NI-TIM, DIM, SI-NI-DIM, TI-DIM and SI-NI-TI-DIM, respectively. As shown in Table 2, our methods improve the attack success rates across all experiments over the baselines. In general, our methods consistently outperform the baseline attacks by 10% ∼ 30% under the black-box setting. Especially, SI-NI-TI-DIM, the extension by combining SI-NI-FGSM with TI-DIM, can fool the adversarially trained models with a high average success rate of 93.5%. It indicates that these advanced adversarially trained models provide little robustness guarantee under the black-box attack of SI-NI-TI-DIM. 
Besides normally trained models and adversarially trained models, we consider to quantify the effectiveness of our methods on other advanced defenses, including the top-3 defense solutions in the NIPS competition (high-level representation guided denoiser (HGD, rank-1), random resizing and padding (R&P, rank-2) and the rank-3 submission (NIPS-r3), and three recently proposed defense methods (feature distillation (FD) , purifying perturbations via image compression model (Comdefend) and randomized smoothing (RS) ). We compare our SI-NI-TI-DIM with MI-FGSM, which is the top-1 attack solution in the NIPS 2017 competition, and TI-DIM , which is state-of-the-art attack. We first generate adversarial examples on the ensemble models, including Inc-v3, Inc-v4, IncResv2, and Res-101 by using MI-FGSM, TI-DIM, and SI-NI-TI-DIM, respectively. Then, we evaluate the adversarial examples by attacking these defenses. As shown in Table 3, our method SI-NI-TI-DIM achieves an average attack success rate of 90.3%, surpassing state-of-the-art attacks by a large margin of 14.7%. By solely depending on the trans- ferability of adversarial examples and attacking on the normally trained models, SI-NI-TI-DIM can fool the adversarially trained models and other advanced defense mechanism, raising a new security issue for the development of more robust deep learning models. Some adversarial examples generated by SI-NI-TI-DIM are shown in Appendix B.. The adversarial examples are crafted on Inc-v3 with various number of iterations ranging from 4 to 16, and then transfer to attack Inc-v4 and IncRes-v2. As shown in Figure 2, NI-FGSM yields higher attack success rates than MI-FGSM with the same number of iterations. In another view, NI-FGSM needs fewer number of iterations to gain the same attack success rate of MI-FGSM. The not only indicate that NI-FGSM has a better transferability, but also demonstrate that with the property of looking ahead, NI-FGSM can accelerate the generation of adversarial examples. Comparison with classic attacks. We consider to make addition comparison with classic attacks, including FGSM , I-FGSM , PGD and C&W . As shown in Table 4, our methods achieve 100% attack success rate which is the same as C&W under the white-box setting, and significantly outperform other methods under the black-box setting. In this work, we propose two new attack methods, namely Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and Scale-Invariant attack Method (SIM), to improve the transferability of adversarial examples. NI-FGSM aims to adopt Nesterov accelerated gradient method into the gradientbased attack, and SIM aims to achieve model augmentation by leveraging the scale-invariant property of models. NI-FGSM and SIM can be naturally combined to build a robust attack, namely SI-NI-FGSM. Moreover, by integrating SI-NI-FGSM with the baseline attacks, we can further improve the transferability of adversarial examples. Extensive experiments demonstrate that our methods not only yield higher success rates on adversarially trained models but also break other strong defense mechanism. Our work of NI-FGSM suggests that other momentum methods (e.g. Adam) may also be helpful to build a strong attack, which will be our future work, and the key is how to migrate the optimization method to the gradient-based iterative attack. Our work also shows that deep neural networks have the scale-invariant property, which we utilized to design the SIM to improve the attack transferability. 
However, it is not clear why the scale-invariant property holds. Possibly it is due to the batch normalization at each convolutional layer, that may mitigate the impact of the scale change. We will also explore the reason more thoroughly in our future work. The algorithm of SI-NI-TI-DIM attack is summarized in Algorithm 2. We can get the SI-NI-DIM attack algorithm by removing Step 10 of Algorithm 2, and get the SI-NI-TIM attack algorithm by removing T (·; p) in Step 7 of Algorithm 2. Input: A clean example x with ground-truth label y true; a classifier f with loss function J; Input: Perturbation size; maximum iterations T; number of scale copies m and decay factor µ. for i = 0 to m − 1 do sum the gradients over the scale copies of the input image 7: Get the gradients by ∇ x J(T (S i (x nes t); p), y true ) apply random resizing and padding to the inputs with the probability p Convolve the gradients by g = W * g convolve gradient with the pre-defined kernel W Update g t+1 by g t+1 = µ · g t + We visualize 12 randomly selected benign images and their corresponding adversarial images in Figure 3. The adversarial images are crafted on the ensemble models, including Inc-v3, Inc-v4, IncRes-v2 and Res-101, using the proposed SI-NI-TI-DIM. We see that these generated adversarial perturbations are human imperceptible. | We proposed a Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and a Scale-Invariant attack Method (SIM) that can boost the transferability of adversarial examples for image classification. | 1,242 | scitldr |
Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks. In this work, we present a probabilistic training method for Neural Networks with both binary weights and activations, called PBNet. By embracing stochasticity during training, we circumvent the need to approximate the gradient of functions for which the derivative is zero almost everywhere, such as sign(·), while still obtaining a fully Binary Neural Network at test time. Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution. Since all operations in a layer of the PBNet operate on random variables, we introduce stochastic versions of Batch Normalization and max pooling, which transfer well to a deterministic network at test time. We evaluate two related training methods for the PBNet: one in which activation distributions are propagated throughout the network, and one in which binary activations are sampled in each layer. Our experiments indicate that sampling the binary activations is an important element for stochastic training of binary Neural Networks. Deep Neural Networks are notorious for having vast memory and computation requirements, both during training and test/prediction time. As such, Deep Neural Networks may be infeasible in various environments such as battery powered devices, embedded devices (because of memory requirements), on-body devices (due to heat dissipation), or environments in which constraints may be imposed by a limited economical budget. Hence, there is a clear need for Neural Networks that can operate in these resource-limited environments. One method for reducing the memory and computational requirements of Neural Networks is to reduce the bit-width of the parameters and activations of the Neural Network. This can be achieved either during training (e.g., BID15; BID0) or using post-training mechanisms (e.g., BID15, BID5). By taking the reduction of the bit-width for weights and activations to the extreme, i.e., a single bit, one obtains a Binary Neural Network. Binary Neural Networks have several advantageous properties, i.e., a 32× reduction in memory requirements, and the forward pass can be implemented using XNOR operations and bit-counting, which results in a 58× speedup on CPU BID20. Moreover, Binary Neural Networks are more robust to adversarial examples BID2. BID21 introduced a probabilistic training method for Neural Networks with binary weights, but allow for full precision activations. In this paper, we propose a probabilistic training method for Neural Networks with both binary weights and binary activations, which are even more memory and computation efficient. In short, we obtain a closed-form forward pass for probabilistic neural networks if we constrain the input and weights to binary (random) variables. The output of the Multiply and Accumulate (MAC) operations, or pre-activation, is approximated using a factorized Normal distribution. Subsequently, we introduce stochastic versions of Max-Pooling and Batch Normalization that allow us to propagate the pre-activations throughout a single layer. By applying the sign(·) activation function to the random pre-activation, we not only obtain a distribution over binary activations, but also allow for backpropagation through the sign(·) operation.
This is especially convenient as this in a deterministic Neural Network all gradient information is zeroed out when using sign as activation. We explore two different methods for training this probabilistic binary neural network: In the first method the activation distribution of layer l is propagated to layer (l + 1), which means the MAC operation is performed on two binary random variables. In the second method the binary activation is sampled as the last operation in a layer using the concrete relaxation BID16. This can be thought of as a form of local reparametrization BID11. We call the networks obtained using these methods PBNet and PBNet-S, respectively. At test time, we obtain a single deterministic Binary Neural Network, an ensemble of Binary Neural Networks by sampling from the parameter distribution, or a Ternary Neural Network based on the Binary weight distribution. An advantage of our method is that we can take samples from the parameter distribution indefinitely-without retraining. Hence, this method allows for anytime ensemble predictions and uncertainty estimates. Note that while in this work we only consider the binary case, our method supports any discrete distribution over weights and activations. Algorithm 1: Pseudo code for forward pass of single layer in PBNet(-S). a l−1 denotes the activation of the previous layer, B the random binary weight matrix, τ is the temperature used for the concrete distribution, f (·, ·) the linear transformation used in the layer, > 0 a small constant for numerical stability, D the dimensionality of the inner product in f, and γ & β are the parameters for batch normalization. DISPLAYFORM0 // Max pooling if max pooling required then n ∼ N (0, I); s = µ + σ n; ι = max-pooling-indices(s); µ, σ 2 = select-at-indices(µ, σ 2, ι); end // Binarization and sampling DISPLAYFORM1 In this section we introduce the probabilistic setting of the PBNet. Moreover, the approximation of the distribution on the pre-activations is introduced. For an explanation of the other operations in the PBNet, see Section 2.1 for the activation, Section 2.1.1 for the sampling of activations, and Section 2.2 for Pooling and Normalization. We aim to train a probabilistic Binary Neural Network. As such, we pose a binary distribution over the weights of the network and optimize the parameters of this distribution instead of the parameters directly. This way, we obtain a distribution over parameters, but also deal with the inherent discreteness of a Binary Neural Network. Given an objective function L(·), this approach can be thought of in terms of the variational optimization framework BID23. Specifically, by optimizing the parameters of the weight distributions, we optimize a bound on the actual loss: min DISPLAYFORM2 where B are the binary weights of the network and q θ (B) is a distribution over the binary weights. For q θ (B) a slight reparametrization of the Bernoulli distribution is used, which we will refer to as the Binary distribution. This distribution is parameterized by θ ∈ [−1, 1] and is defined by: DISPLAYFORM3 For the properties of this distribution, please refer to Appendix A.We will now consider using the Binary distribution for both the weights and the activations in a Neural Network. Since the pre-activations in a Neural Network are computed using MAC operations, which are the same for each value in the pre-activation, we will only consider a single value in our discussion here. 
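Before working through that single value, a minimal sketch of the Binary distribution itself may help (its properties are listed in Appendix A): sampling reduces to a shifted and scaled Bernoulli, and the empirical moments recover the mean θ and variance 1 − θ² used below. The helper name and the sample size are ours.

import numpy as np

def binary_sample(theta, rng):
    # a_i = +1 with probability (theta_i + 1) / 2 and -1 otherwise (reparametrized Bernoulli).
    theta = np.asarray(theta, dtype=float)
    return np.where(rng.random(theta.shape) < (theta + 1.0) / 2.0, 1.0, -1.0)

rng = np.random.default_rng(0)
theta = np.array([-0.8, 0.0, 0.9])
samples = binary_sample(np.broadcast_to(theta, (100000, 3)), rng)
print(samples.mean(axis=0))   # close to theta
print(samples.var(axis=0))    # close to 1 - theta**2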
Let w ∼ Binary(θ) and h ∼ Binary(φ) be the weight and input random variable for a given layer. As such, the innerproduct between the weights and input is distributed according to a translated and scaled Poisson binomial distribution: DISPLAYFORM4 Where D is the dimensionality of h and w and denotes element-wise multiplication. See the picket fence on the top in FIG0 for an illustration of the PMF of a Poisson binomial distribution. Although the scaled and translated Poisson binomial distribution is the exact solution for the inner product between the weight and activation random variables, it is hard to work with in subsequent layers. For this reason, and the fact that the Poisson binomial distribution is well approximated by a Normal distribution , we use a Normal approximation to the Poisson binomial distribution, which allows for easier manipulations. Using the properties of the Binary distribution and the Poisson binomial distribution, the approximation for the pre-activation a is given by: DISPLAYFORM5 Note that, this is exactly the approximation one would obtain by using the Lyapunov Central Limit Theorem (CLT), which was used by BID21. This allows us to obtain a close approximation to the pre-activation distribution, which we can propagate through the layer and/or network. So far, only the MAC operation in a given layer is discussed, in Section 2.1 application of the binary activation is discussed and in Section 2.1. The stochastic versions of Batch Normalization and Max Pooling are introduced in Section 2.2. For specifics on sampling the binary activation, see Section 2.1.1. The full forward pass for a single layer is given in detail in Algorithms 1. Since the output of a linear operation using binary inputs is not restricted to be binary, it is required to apply a binarization operation to the pre-activation in order to obtain binary activations. Various works -e.g., BID7 and BID20 -use either deterministic or stochastic binarization functions, i.e., DISPLAYFORM0 +1 with probability p = sigmoid(a) −1 with probability 1 − p.In our case the pre-activations are random variables. Hence, applying a deterministic binarization function to a random pre-activations in a stochastic binary activation. Specifically, let a i ∼ N (µ i, σ 2 i) be a random pre-ctivation obtained using the normal approximation, as introduced in the previous section, then the activation (after binarization) is again given as a Binary random variable". Interestingly, the Binary probability can be computed in closed form by evaluating the probability density that lies above the binarization threshold: DISPLAYFORM1 where Φ(·|µ, σ 2) denotes the CDF of N (µ, σ 2). Applying the binarization function to a random pre-activation has two advantages. First, the derivatives ∂q i /∂µ i and ∂q i /∂σ i are not zero almost everywhere, in contrast to the derivatives of b det and b stoch when applied to a deterministic input. Second, the distribution over h i reflects the true uncertainty about the sign of the activation, given the stochastic weights, whereas b stoch uses the magnitude of the pre-activation as a substitute. For example, a pre-activation with a high positive magnitude and high variance will be deterministically mapped to 1 by b stoch. In contrast, our method takes the variance into account and correctly assigns some probability mass to −1. See FIG0 for a graphical depiction of the stochastic binary activation. So far, we have discussed propagating distributions throughout the network. 
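The two closed-form steps above, the Normal approximation of the pre-activation and the CDF-based binarization, amount to a few lines of NumPy/SciPy. The sketch below assumes independent weight and input variables for one output unit and parametrizes the resulting activation as Binary(2q − 1), where q is the probability mass above the binarization threshold; all variable names are ours.

import numpy as np
from scipy.stats import norm

def preactivation_moments(theta, phi):
    # a = sum_i w_i h_i with w_i ~ Binary(theta_i) and h_i ~ Binary(phi_i).
    # The product w_i h_i is Binary(theta_i * phi_i), so it has mean theta_i * phi_i and
    # variance 1 - (theta_i * phi_i)^2; the CLT approximation sums these moments over i.
    prod = theta * phi
    return np.sum(prod), np.sum(1.0 - prod ** 2)

def binary_activation_param(mu, var):
    # q = P(a >= 0) under a ~ N(mu, var); the sign activation is then Binary(2q - 1).
    q = 1.0 - norm.cdf(0.0, loc=mu, scale=np.sqrt(var))
    return 2.0 * q - 1.0

theta = np.array([0.9, -0.7, 0.2, 0.5])   # weight parameters of one output unit
phi = np.array([0.8, 0.8, -0.3, 0.1])     # Binary parameters of the incoming activations
mu, var = preactivation_moments(theta, phi)
print(binary_activation_param(mu, var))   # Binary parameter of the stochastic activation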
Alternatively, the binary activations can be sampled using the Concrete distribution BID16 during training. specifically, we use the hard sample method as discussed by BID9. By sampling the activations, the input for subsequent layers will match the input that is observed at test time more closely. As a consequence of sampling the activation, the input to a layer is no longer a distribution but a h ∈ {−1, +1} D vector instead. As such, the normal approximation to the pre-activation is computed slightly different. From the Lyapunov CLT it follows that the approximation to the distribution of the pre-activation is given by: DISPLAYFORM0 where w ∼ Binary(θ) is a random weight. Similarly, the pre-activation of the input layer is also computed using this approximation-given a real-valued input vector. We will refer to a PBNet that uses activation sampling as PBNet-S. Other than a linear operation and an (non-linear) activation function, Batch Normalization BID8 and pooling are two popular building blocks for Convolutional Neural Networks. For Binary Neural Networks, applying Batch Normalization to a binarized activation will in a non-binary . Moreover, the application of max pooling on a binary activation will in a feature map containing mostly +1s. Hence, both operations must be applied before binarization. However, in the PBNet, the binarization operation is applied before sampling. As a consequence, the Batch Normalization and pooling operations can only be applied on random pre-activations. For this reason, we define these methods for random variables. Although there are various ways to define these operation in a stochastic fashion, our guiding principle is to only leverage stochasticity during training, i.e., at test time, the stochastic operations are replaced by their conventional implementations and parameters learned in the stochastic setting must be transferred to their deterministic counterparts. Batch Normalization (BN) BID8 -including an affine transformation -is defined as follows: DISPLAYFORM0 where a i denotes the pre-activation before BN,â the pre-activation after BN, and m & v denote the sample mean and variance of DISPLAYFORM1, for an M -dimensional pre-activation. In essence, BN translates and scales the pre-activations such that they have approximately zero mean and unit variance, followed by an affine transformation. Hence, in the stochastic case, our aim is that samples from the pre-activation distribution after BN also have approximately zero mean and unit variance-to ensure that the stochastic batch normalization can be transfered to a deterministic binary neural network. This is achieved by subtracting the population mean from each pre-activation random variable and by dividing by the population variance. However, since a i is a random variable in the PBNet, simply using the population mean and variance equations will in non-standardized output. Instead, to ensure a standardized distribution over activations, we compute the expected population mean and variance under the pre-activation distribution: DISPLAYFORM2 where M is the total number of activations and a i ∼ N (µ i, σ i) are the random pre-activations. By substituting m and v in Equation 8 by Equation 9 and 10, we obtain the following batch normalized Gaussian distributions for the pre-activations: DISPLAYFORM3 Note that this assumes a single channel, but is easily extended to 2d batch norm in a similar fashion as conventional Batch Normalization. 
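Since the expected batch mean and variance (the displayed equations above) are not reproduced here, the following sketch spells out one consistent reading for a single channel: the expected batch mean is the mean of the Gaussian means, and the expected batch variance adds the spread of the means to the average of the individual variances (treating the batch mean as a constant). The ε term and the single-channel restriction are implementation conveniences of ours.

import numpy as np

def stochastic_batch_norm(mu, var, gamma, beta, eps=1e-5):
    # Batch-normalize M Gaussian pre-activations N(mu_i, var_i) of one channel.
    m = np.mean(mu)
    v = np.mean(var) + np.mean((mu - m) ** 2)   # expected batch variance under the Gaussians
    scale = gamma / np.sqrt(v + eps)
    mu_hat = (mu - m) * scale + beta            # new mean of every Gaussian
    var_hat = var * scale ** 2                  # variances scale by the square
    return mu_hat, var_hat

After this affine change of variables, samples from the normalized Gaussians have, up to the affine γ, β part, approximately zero mean and unit variance across the batch, which is what allows the learned parameters to be reused by a conventional Batch Normalization layer at test time.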
At test time, Batch Normalization in a Binary Neural Network can be reduced to an addition and sign flip of the activation, see Appendix B for more details. In general, pooling applies an aggregation operation to a set of (spatially oriented) pre-activations. Here we discuss max pooling for stochastic pre-activations, however, similar considerations apply for other types of aggregation functions. In the case of max-pooling, given a spatial region containing stochastic pre-activations a 1,..., a K, we aim to stochastically select one of the a i. Note that, although the distribution of max(a 1, . . ., a K) is well-defined BID18, its distribution is not Gaussian and thus does not match one of the input distributions. Instead, we sample one of the input random variables in every spatial region according to the probability of that variable being greater than all other variables, i.e., ρ i = p(a i > z \i), where z \i = max({a j} j =i ). ρ i could be obtained by evaluating the CDF of (z \i − a i) at 0, but to our knowledge this has no analytical form. Alternatively, we can use Monte-Carlo integration to obtain ρ: DISPLAYFORM0 where one-hot(i) returns a K-dimensional one-hot vector with the ith elements set to one. The pooling index ι is then sampled from Cat(ρ). However, more efficiently, we can sample s ∼ p(a 1, . . ., a K) and select the index of the maximum in s, which is equivalent sampling from Cat(ρ). Hence, for a given max pooling region, it is sufficient to obtain a single sample from each normal distribution associated with each pre-activation and keep the random variable for which this sample is maximum. A graphical overview of this is given in Figure 2.Other forms of stochastic or probabilistic max pooling were introduced by BID13 , however, in both cases a single activation is sampled based on the magnitude of the activations. In contrast, in our procedure we stochastically propagate one of the input distributions over activations. For the PBNet the parameters θ for q θ (B) are initialized from a uniform U (−1, 1) distribution. Although the final parameter distribution more closely follows a Beta(α, α) distribution, for α < 1, we did not observe any significant impact choosing another initialization method for the PBNet. In the case of the PBNet-S, we observed a significant improvement in training speed and performance by initializing the parameters based on the parameters of a pre-trained full precission Neural Network. This initializes the convolutional filters with more structure than a random initialization. This is desirable as in order to flip the value of a weight, the parameter governing the weight has to pass through a high variance regime, which can slow down convergence considerably. Select maximum per region 2 Sample from input distributions 1Keep maximum distribution for each region 3Figure 2: Max pooling for random variables is performed by taking a single sample from each of the input distributions. The output random variable for each pooling region is the random variable that is associated with the maximum sample. For the PBNet-S, We use the weight transfer method introduced by BID21 in which the parameters of the weight distribution for each layer are initialized such that the expected value of the random weights equals the full precision weight divided by the standard deviation of the weights in the given layer. Since not all rescaled weights lay in the [−1, 1] range, all binary weight parameters are clipped between [−0.9, 0.9]. 
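Two of the pieces just described, the sampling-based max pooling and the transfer initialization of θ, can be sketched in a few lines; the per-layer standard deviation and the fixed clipping range follow the description above, while the function names are ours.

import numpy as np

def stochastic_max_pool_region(mu, var, rng):
    # mu, var: shape (K,) for one pooling region of K Gaussian pre-activations.
    # Drawing one joint sample and keeping the argmax is equivalent to sampling the
    # pooling index from Cat(rho) as described in the text.
    s = rng.normal(loc=mu, scale=np.sqrt(var))
    idx = int(np.argmax(s))
    return mu[idx], var[idx]

def transfer_init(full_precision_weights):
    # Initialize theta so that E[w] = W / std(W) for the layer, then clip to [-0.9, 0.9].
    w = np.asarray(full_precision_weights, dtype=float)
    return np.clip(w / w.std(), -0.9, 0.9)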
This transfer method transfers the structure present in the filters of the full precision network and ensures that a significant part of the parameter distributions is initialized with low variance. In our training procedure, a stochastic neural network is trained. However, at test time (or on hardware) we want to leverage all the advantages of a full binary Neural Network. Therefore, we obtain a deterministic binary Neural Network from the parameter distribution q θ (B) at test time. We consider three approaches for obtaining a deterministic network: a deterministic network based on the mode of q θ (B) called PBNET-MAP, an ensemble of binary Neural Networks sampled from q θ (B) named PBNET-x, and a ternary Neural Network (PBNET-TERNARY), in which a single parameter W i may be set to zero based on q θ, i.e.: DISPLAYFORM0 The ternary network can also be viewed as a sparse PBNet, however, sparse memory look-ups may slow down inference. Note that, even when using multiple binary neural networks in an ensemble, the ensemble is still more efficient in terms of computation and memory when compared to a full precision alternative. Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution. Since the trained weight distribution is not fully deterministic, the sampling of individual weight instantiations will in a shift of the batch statistics. As a consequence, the learned batch norm statistics no longer closely match the true statistics. This is alleviated by re-estimating the batch norm statistics based on (a subset of) the training set after weight sampling using a moving mean and variance estimator. We observed competitive using as little as 20 batches from the training set. Binary and low precision neural networks have received significant interest in recent years. Most similar to our work, in terms of the final neural network, is the work on Binarized Neural Networks by BID7. in this work a real-valued shadow weight is used and binary weights are obtained by binarizing the shadow weights. Similarly the pre-activations are binarized using the same binarization function. In order to back-propagate through the binarization operation the straightthrough estimator BID6 ) is used. Several extensions to Binarized Neural Networks have been proposed which -more or less -qualify as binary neural networks: XNOR-net BID20 in which the real-valued parameter tensor and activation tensor is approximated by a binary tensor and a scaling factor per channel. ABC-nets take this approach one step further and approximate the weight tensor by a linear combination of binary tensors. Both of these approaches perform the linear operations in the forward pass using binary weights and/or binary activations, followed by a scaling or linear combination of the pre-activations. , similar methods to BID7 are used to binarize a wide resnet to obtain on ImageNet very close to the full precision performance. Another method for training binary neural networks is Expectation Backpropagation BID22 in which the central limit theorem and online expectation propagation is used to find an approximate posterior. This method is similar in spirit to ours, but the training method is completely different. 
Most related to our work is the work by BID21 which use the local reparametrization trick to train a Neural Network with binary weights and the work by BID1 which also discuss a binary Neural Network in which the activation distribution are propagated through the network. Moreover, in the CLT was used to approximate dropout noise during training in order to speed up training, however, there is no aim to learn binary (or discrete) weights or use binary activations in this work. We evaluate the PBNet on the MNIST and CIFAR-10 benchmarks and compare the to Binarized Neural Networks BID7, since the architectures of the deterministic networks obtained by training the PBNet are equivalent. The PBNets are trained using either a cross-entropy (CE) loss or a binary cross entropy for each class (BCE). For the CE loss there is no binarization step in the final layer, instead the mean of the Gaussian approximation is used as the input to a softmax layer. For BCE, there is a binarization step, and we treat the probability of the ith output being +1 as the probability of the input belonging to the ith class. Specifically, for an output vector p ∈ C for C classes and the true class y, the BCE loss for a single sample is defined as DISPLAYFORM0 The weights for the PBNet-S networks are initialized using the transfer method described in Section 2.3 and the PBNets are initialized using a uniform initialization scheme. All models are optimized using Adam and a validation loss plateau learning rate decay scheme. We keep the temperature for the binary concrete distribution static at 1.0 during training. For all settings, we optimize model parameters until convergence, after which the best model is selected based on a validation set. Our code is implemented using PyTorch BID19.For Binarized Neural Networks we use the training procedure described by BID7, i.e., a squared hinge loss and layer specific learning rates that are determined based on the Glorot initialization method BID4.Experimental details specific to datasets are given in Appendix C and the are presented in TAB0. We report both test set accuracy obtained after binarizing the network as well as the the test set accuracy obtained by the stochastic network during training (i.e., by propagating activation distributions). As presented in TAB0 the accuracy improves when using an ensemble. Moreover, the predictions of the ensemble members can be used to obtain an estimate of the certainty of the ensemble as a whole. BID7, PBNet, and a full precission network (FPNet). PBNet-map refers to a deterministic PBNet using the map estimate, PBNet-Ternary is a ternary deterministic network obtained from q θ, and PBNet-X refers to an ensemble of X networks, each sampled from the same weight distribution. For the ensemble both mean and standard deviation are presented. The propagate column contains obtained using the stochastic network whereas in the binarized column are obtained using a deterministic binary Neural Network. To evaluate this, we plot an error-coverage curve BID3 in FIG2. This curve is obtained by sorting the samples according to a statistic and computing the error percentage in the top x% of the samples -according to the statistic. For the Binarized Neural Network and PBNet-MAP the highest softmax score is used, whereas for the ensembles the variance in the prediction of the top class is used. The figure suggests that the ensemble variance is a better estimator of network certainty, and moreover, the estimation improves as the ensemble sizes increases. 
As discussed in Section 2.4, after sampling the parameters of a deterministic network the batch statistics used by Batch Normalization must be re-estimated. FIG2 shows the obtained using a various number of batches from the training set to re-estimate the statistics. This shows that even a small number of samples is sufficient to estimate the statistics. We perform an ablation study on both the use of (stochastic) Batch Normalization and the use of weight transfer for the PBNet-S on CIFAR-10. For Batch Normalization, we removed all batch normalization layers from the PBNet-S and retrained the model on CIFAR-10. This ed in a test set accuracy of 79.21%. For the weight initialization experiment, the PBNet-S weights are initialized using a uniform initialization scheme and is trained on CIFAR-10, ing in a test set accuracy of 83.61%. Moreover, the accuracy on the validation set during training is presented in FIG2. Note that these numbers are obtained without sampling a binarized network from the weight distribution, i.e., local reparametrization and binary activation samples are used. The PBNet-S that uses both weight transfer and stochastic Batch Normalization in a significant performance improvement, indicating that both stochastic Batch Normalization and weight transfer are necessary components for the PBNet-S. The of our experiments show that, following our training procedure, sampling of the binary activations is a necessary component. Although the stochastic PBNet generalizes well to unseen data, there is a significant drop in test accuracy when a binary Neural Network is obtained from the stochastic PBNet. In contrast, this performance drop is not observed for PBNet-S. A potential explanation of this phenomenon is that by sampling the binary activation during training, the network is forced to become more robust to the inherent binarization noise that is present at test time of the binarized Neural Network. If this is the case, then sampling the binary activation can be thought of as a regularization strategy that prepares the network for a more noisy binary setting. However, other regularization strategies may also exist. We have presented a stochastic method for training Binary Neural Networks. The method is evaluated on multiple standardized benchmarks and reached competitive . The PBNet has various advantageous properties as a of the training method. The weight distribution allows one to generate ensembles online which in improved accuracy and better uncertainty estimations. Moreover, the Bayesian formulation of the PBNet allows for further pruning of the network, which we leave as future work. A BINARY DISTRIBUTION For convenience, we have introduced the Binary distribution in this paper. In this appendix we list some of the properties used in the paper, which all follow direcly from the properties of the Bernoulli distribution. The Binary distribution is a reparametrization of the Bernoulli distribution such that: DISPLAYFORM0 This gives the following probability mass function: DISPLAYFORM1 where a ∈ {−1, +1} and θ ∈ [−1, 1]. From this, the mean and variance are easily computed: DISPLAYFORM2 Finally, let b ∼ Binary(φ), then ab ∼ Binary(θφ). During training the PBNet is trained using stochastic Batch Normalization. At test time, the parameters learned using stochastic Batch Normalization can be transferred to a conventional Batch Normalization implementation. 
Alternatively, Batch Normalization can be reduced to an (integer) addition and multiplication by ±1 after applying the sign activation function. Given a pre-activation a, the application of Batch Normalization followed by a sign binarization function can be rewritten as: DISPLAYFORM0 DISPLAYFORM1 when a ∈ Z, which is the case for all but the first layer Note that we have used sign = b det = +1 here, as we have used everywhere in order to use sign as a binarization function. The MNIST dataset consists of of 60K training and 10K test 28×28 grayscale handwritten digit images, divided over 10 classes. The images are pre-processed by subtracting the global pixel mean and dividing by the global pixel standard deviation. No other form of pre-processing or data augmentation is used. For MNIST, we use the following architecture: DISPLAYFORM0 where XC3 denotes a binary convolutional layer using 3 × 3 filters and X output channels, Y FC denotes a fully connected layer with Y output neurons, SM10 denotes a softmax layer with 10 outputs, and MP2 denotes 2 × 2 (stochastic) max pooling with stride 2. Note that if a convolutional layer is followed by a max pooling layer, the binarization is only performed after max pooling. All layers are followed by (stochastic) batch normalization and binarization of the activations. We use a batchsize of 128 and an initial learning rate of 10 −2 Results are reported in TAB0. The CIFAR-10 (BID12) dataset consists of 50K training and 10K test 32 × 32 RGB images divided over 10 classes. The last 5,000 images from the training set are used as validation set. Tthe images are only pre-processed by subtracting the channel-wise mean and dividing by the standard deviation. We use the following architecture for our CIFAR-10 experiment (following BID21): DISPLAYFORM0 where we use the same notation as in the previous section. The Binarized Neural Network baseline uses the same architecture, except for one extra 1024 neuron fully connected layer. During training, the training set is augmented using random 0px to 4px translations and random horizontal fl Results are reported in TAB0. | We introduce a stochastic training method for training Binary Neural Network with both binary weights and activations. | 1,243 | scitldr |
The goal of generative models is to model the underlying data distribution of a sample based dataset. Our intuition is that an accurate model should in principle also include the sample based dataset as part of its induced probability distribution. To investigate this, we look at fully trained generative models using the Generative Adversarial Networks (GAN) framework and analyze the ing generator on its ability to memorize the dataset. Further, we show that the size of the initial latent space is paramount to allow for an accurate reconstruction of the training data. This gives us a link to compression theory, where Autoencoders (AE) are used to lower bound the reconstruction capabilities of our generative model. Here, we observe similar to the perception-distortion tradeoff . Given a small latent space, the AE produces low quality and the GAN produces high quality outputs from a perceptual viewpoint. In contrast, the distortion error is smaller for the AE. By increasing the dimensionality of the latent space the distortion decreases for both models, but the perceptual quality only increases for the AE. Generative Adversarial Networks (GANs) were introduced by for the purpose of generative modelling. Since then this framework has been successfully applied to works in style transfer by , superresolution by and semi-supervised learning by , but what GANs actually learn is still poorly understood as has been noted by. Recently, GANs have been used to solve inverse problems, where it was tried to use the generated manifold to solve an auxiliary task like image completion , MRI reconstruction or anomaly detection . For those applications, it is necessary to know if the generator NN actually describes the distribution well. Related works have shown that faithfully reconstructing the images from a generator network is not trivial . The original convergence proof by assumes that the generator and discriminator Neural Networks (NNs) have infinite capacity and they showed that the discriminator network models the Jensen-Shannon divergence between the probability distribution induced by the generator and the real data distribution. Others have adapted this paradigm and devised loss functions which have been shown to converge to other divergences or distances on the underlying probability distributions;; ). Regularization techniques like Gradient Penalty and Spectral Norm did improve the stability of the training process ) but it is still unclear how well this NNs actually approximate such distances even for trivial problems . Additionally, it is not at all clear how the generated distribution or the actual target distribution look like. used the birthday paradox to empirically gauge the size of the support of the generated distribution. GANs are used to transform a well understood low dimensional distribution (in practice either gaussian or uniform) to a high dimensional unknown distribution by playing a min-max game between two NNs. This paper is based around the intuition, that an estimated probability distribution from a dataset X has high precision if a high percentage of the actual data samples are included in the estimated distribution. To have a sense of what an adequate capacity for a generator network is, we use AE to reconstruct the dataset first. This work relies on the assumption that it is easier to reconstruct the data samples alone, than to reconstruct the entire manifold and Section 5 shows empirical evidence for this. 
Based on our intuition the manifold consists of the data samples and imposes additional structure between the data samples. In contrast by just reproducing the data samples, no such additional restrictions are given, making the problem strictly simpler. AEs can be trained rapidly and have been researched in detail for a long time . In contrast, trying to do a hyperparameter search on the GAN networks themselves gives rise to all kinds of problems, like instabilities in the training process, random failures and dependence on random seeds for their performance ). Hence, our contributions are as follows: • An investigation of the impact of the dimensionality of the latent space on the generated manifold. We showcase that the fit of the data depends heavily on the latent space. We also show similar thereby to the perception-distortion tradeoff , where with a small dimension for the latent space, the GAN optimizes for perception and the AE optimizes for distortion. • Relating the GAN problem to a compression task and furthermore using compression tools via deep learning to produce a lower bound for a dataset dependent suitable dimensionality of the latent space. • An investigation of the generated manifold and the limitations thereof to produce shifted or noisy images and how this relates to the size of the latent space and overfitting of the generative model. The remainder of this paper is organized as follows. Section 2 shows the related work. Then in Section 3 we revisit the theory behind pseudo inverting NNs and we explain our methodology in Section 4. In Section 5 the are shown. Section 6 draws based on those . GANs have been invented by for the task of image synthesis and semisupervised learning. The initial formulation has been shown to suffer from mode collapse and vanishing gradient problems and multiple other loss functions have been proposed (; ; . We choose the Wasserstein GAN, due to it using the L2-distance as its transport cost function to judge the quality of the generated manifold, which is similar to the Mean Squared Error (MSE) used in the classical AE . To satisfy the Lipschitz constraint we use , due to its good performance in review works; ). For evaluation purposes we need a quality metrics for GAN and the most common ones rely on the usage of classification models, which are trained on ImageNet . Two commonly used metrics are the Inception Score and the Fréchet Inception Distance (FID) . To combat the bias of having trained on proposed to train a new classifier to evaluate the GAN model, but this is time consuming and prone to errors by not having a standardized metric. We opted for the FID score, due to its prevalence in related works; ). AEs have been studied extensively in the past and initially have been proposed by. They have found usage in compression literature (Cheng et al. (2018a; b); ). Their first usage in a GAN framework has been shown in the Energy based GAN by. Since then they have been used as preprocessing to reduce the dimensionality of the data , as regularizers or as generative models themselves . The inversion of generator networks has been initially used by and there have been multiple works on how to improve their method by adding heuristics like stochastic clipping on top of that. We opted against stochastic clipping, because we already project back to the likely set of candidates. Stochastic clipping would add even more randomness to the . Generalization of GANs through the detection of overfitting has been proposed recently by. 
Therein the authors are mostly concerned (f) CelebA: dim(z) = 5000 Figure 1: Histogram of the values in the weight matrix and the convolutional layers of a trained WGAN-GP with a latent space specific dimensionality for CIFAR-10 and CelebA datasets. Notice, that the weights share the same characteristics as if they were drawn from a normal distribution. with detecting memorisation of the training set in the manifold and avoid said memorisation to generalize. In contrast we make the assumption that an estimated probability distribution based on samples should include those samples and overfitting is detected based on a hold out set as well as the ability to reconstruct that hold out set as well as the training set. Additionally, we show that the ability of the GAN to memorize depends mostly on the dimensionality of the latent space and the capacity of the generator network and can be estimated using compression techniques. In this work, we want to show the fit of the distribution of trained GAN newtworks using inversions of such networks. This motivates the question of " When are NNs invertible?". Due to the nonconvex and non-bijective nature of NNs using ReLU activations invertibility is not given in the general case. The function induced by the NN is in general neither injective nor surjective and it is trivial to construct such cases. However, if the weights are gaussian distributed, proofs exist, which guarantee that the NN is invertible with a high probability. For this, we revisit the inversion theory of NNs largely pioneered by. They empirically showed that the weights of trained NNs actually exhibit random like properties, enabling the usage of Random Matrix Theory (RMT) to investigate such networks. Theorem 1 in proves that we can invert 2-layer NNs with high probability. In their work they empirically showed that this also holds for multi layer networks. In Figure 1 we show that the same holds true for trained GAN networks. Note, that this property holds across datasets and latent spaces used to train those networks. Glorot initialization was used to initialize these networks. Therefore, the standard deviation used for the initialization process differs for each layer. However, as is demonstrated in Fig. 1, all layers converged to a similar gaussian distribution. Another indicator, that the optimization process is likely to succeed is given by a trick proposed by for classification networks. It allows us to visualize the loss surface of the optimization problem for one sample at a time. For a particular data point z *, we chose two orthogonal vectors α, β and plot the following 2D function: We chose α = z * − z 0 to model the optimization path towards the original data point. For this experiment we do not need to run an optimization, because the point is to show that it is possible There is a clear optimization path from z 0 = 0 to a point z * even in this simplified plot, empirically validating the claims of , that an accurate reconstruction of the generator manifold is possible works using first order methods. to recover points from the manifold. To ensure that β is an orthogonal vector, we simply chose this vector by setting β = 0 and then. Then α, β are orthogonal and we can use them as a basis for the plots shown in Fig. 1. For visualization purposes, we scaled β to have the same norm as α; e.g. ||β|| 2 = ||α|| 2. We vary a, b ∈ [−2, 2]. In our experiments this always ed in benign structures as is shown in Fig. 2. 
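One way to produce such a 2-D slice of the loss landscape is sketched below. Since the construction of beta is only partially described above, the Gram-Schmidt orthogonalization of a random vector and the squared-error objective are our assumptions; the grid range [-2, 2] matches the one stated in the text.

```python
import numpy as np

def loss_plane(g, x_target, z0, z_star, grid=np.linspace(-2, 2, 41), seed=0):
    """Evaluate the reconstruction loss on the plane spanned by
    alpha = z_star - z0 and an orthogonal direction beta of the same norm.

    g        : generator mapping latent vectors to images (callable)
    x_target : the image the optimization path is heading towards
    """
    rng = np.random.default_rng(seed)
    alpha = z_star - z0                                   # assumes z_star != z0
    # one way to obtain an orthogonal direction: Gram-Schmidt on a random vector
    beta = rng.standard_normal(alpha.shape)
    beta -= beta.dot(alpha) / alpha.dot(alpha) * alpha
    beta *= np.linalg.norm(alpha) / np.linalg.norm(beta)  # ||beta|| = ||alpha||

    surface = np.empty((len(grid), len(grid)))
    for i, a in enumerate(grid):
        for j, b in enumerate(grid):
            z = z0 + a * alpha + b * beta
            surface[i, j] = np.sum((g(z) - x_target) ** 2)
    return surface
```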
If it is trivial to recover points from the manifold and impossible to recover the training images, than this implies that the manifold does not include the training images. Currently, designing new GAN networks works by running a huge hyperparameter search on a plausible set of combinations; ). noted that the progress in GANs has actually been quite minimal and vastly overstated, and depends on the hyperparameters more than on the improvements in the methodology, while did notice that non-saturating loss in combiniation with spectral norm produced consistently good , but never the best ones. The goal of the GAN framework is to encapsulate the high dimensional image distribution in a lower dimensional noise distribution, whereas a generator network maps the lower dimensional distribution to the former high dimensional image distribution . This implies that we compress the larger distribution onto the simple lower dimensional distribution and use the generator network as an approximation for the decompression. We rely on the assumption, that less capacity is needed to reconstruct the training set, than to reconstruct the entire manifold. Therefore, to gain some insight into the necessary capacity of the generator network, the usage of an AE is proposed to gauge the number of parameters and the size of the latent space to enable suitable approximation of the training set. In theory, a latent dimension of 1 would be sufficient to reconstruct the entire dataset, by mapping a curve to the datapoints. A simple smooth function that does this for a dataset {x 1, .., x n} = X ∈ R n×d and {p 1, ..., p n} = P ∈ R n×1 looks like this: This function will output the corresponding image x i for every p i. In practice, this behavior is not observed. AEs with small latent spaces tend to output blurry approximations of the training images as is shown in Section 5. Once a suitable AE for a dataset is found, the decoder part of the AE is used as the generative model of choice. The AE is trained using the L2-loss, which also is the transport cost function used in WGANs ) to gauge the quality of the generated manifold. Therefore, the objective function of the AE using a decoder d and an encoder e is defined as follows: ||d(e(x)) − x|| 2 We do not opt for any regularization, because we want to fit the data as closely as possible to determine if the generator capacity is adequate for reconstructing the dataset. In this work we show empirically in Sec. 5 that if the AE already fails to produce crisp , then also the GAN algorithm fails at having the dataset included in the generated manifold. One thing to note is that, while the encoder part is used as our discriminator, it has a completely different objectives and it is unclear, if a better encoder is also a better discriminator. The key part of this work is to analyse the ability of the GAN to memorise the training dataset, following the assumption, that a probablity distribution based on a sample dataset has to include that dataset. To our best knowledge there is no analytical way to invert generative NNs. Therefore this ill posed problem is stated as an inverse optimzation problem for a function g: This is solved using a standard first order gradient method. However, as is shown in Section 5 the ing latent vectors are unlikely to stem from the targeted noise distribution, if there are no constraints added. Therefore, the search space for z is restricted to the space of plausible latent vectors. 
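The next passage makes the notion of a plausible latent vector precise via a bound on the norm of z. Assuming that bound, a minimal sketch of the resulting projected first-order inversion is given below; the use of Adam, the squared-error objective, and the Monte-Carlo estimate of the norm bound are our implementation choices, not taken verbatim from the text.

```python
import torch

def invert_generator(g, x_target, dim_z, steps=10_000, lr=1e-3, seed=0):
    """Recover a latent vector z with g(z) close to x_target, while keeping z
    inside the plausible ball ||z|| <= E[||z||] + 3 * std(||z||) of N(0, I)."""
    torch.manual_seed(seed)
    # estimate the norm bound by sampling from the prior
    norms = torch.randn(100_000, dim_z).norm(dim=1)
    bound = norms.mean() + 3.0 * norms.std()

    z = torch.zeros(dim_z, requires_grad=True)            # start at z0 = 0
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((g(z) - x_target) ** 2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():                              # projection step
            n = z.norm()
            if n > bound:
                z.mul_(bound / n)
    return z.detach()
```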
Plausibility in this context means that the probability of the latent vector originating from this space is larger than 99.7% (3-sigma-rule). In the standard multivariate gaussian distribution the norm of the vector z corresponds inversely to the likelihood of z ∈ N (0, I). Therefore we restrict the norm to be ||z|| ≤ E[||z||] + 3 V AR [||z||]. For the multivariate gaussian distribution this is easy to calculate and the final objective function is given as follows: This is solved using a projected ADAM optimizer . Our experiments are done on medium sized datasets (CIFAR-10 and CelebA (64 × 64 resolution)) and the proof of concept on MNIST is relegated to the supplementary material. The NN adapted for this task is the standard DCGAN from. Convolutional layers and resize operations are used instead of transposed convolutions, to avoid checkerboard artifacts . The full hyperparameter setting is shown in the supplementary material. The visual quality measured by the FID of the ing samples is on par with the survey work by. This work is not aimed at improving the visual quality of the samples generated by GAN algorithms. For CIFAR-10 the test set is used as our validation set and for CelebA we randomly split off 20% of the dataset to be used as validation set. In principle, getting from an AE to a GAN is just a rearrangement of the NNs. For the GAN training, the default hyperparameter are used as in. The AE NNs are optimized using the ADAM optimizer with default hyperparameters for all the datasets, always on a single GPU. A comparison of the ing reconstruction errors is shown in Fig. 3. Therein the different AE and GAN models only differ in their latent space sizes. Increasing the latent space dimension also increases the number of parameters. However, for the sake of simplicity this was neglected. The AE network acts as a lower bound for the GAN algorithm, therefore validating our intuition that the AE complexity lower-bounds the GAN complexity. The visual quality of the generated images is evaluated using the FID versus the reconstructed images by an AE for the same latent space sizes. The experimental are shown in Fig. 4 for the CIFAR and CelebA datasets and demonstrate, that the visual quality of the GAN generated images barely changes with different latent dimensionality. In contrast, the AE reconstructions gain quality and sharpness as the latent space increases. This reflects the output of those models shown in Fig. 5, where the ing images are clearly an approximation of the actual training images. On the other hand, the GAN reconstructions have similar visual quality independent of the actual latent space. However, the ing images are only remotely similar to the actual image for insufficiently large latent spaces. This phenomenon has been observed in other works, where a similar looking person to an actual training image is found by the inversion procedure . However, we show in Fig. 5, that this phenomenon depends mostly on the size of the latent space. curve for different latent spaces (lower is better). Increasing or decreasing the latent space does not in a noticeable difference in the quality metric for the WGAN-GP, but it does for the AE. While it is possible to reconstruct the dataset accurately using a large enough latent space, in this section we look at what else those models can reconstruct. In these experiments, our baseline model The first columns show the original images. As we increase the latent space of the autoencoder the image quality increasing. 
The GAN reconstructions are of high perceptual quality even for low dimensional spaces, but correspond to other people as has been observed by. is a WGAN-GP trained on CelebA. The first experiment shown in Fig. 6 uses translated training images. This is especially challenging for this dataset, due to the pre-aligned face images used in the CelebA dataset. As is shown in Fig. 6 the error goes up and the visual quality of the reconstructions deteriorates even for small pixel shifts as long as the latent space is small enough. Figure 6: Impact of translation on the reconstruction ability of GAN networks. Notice, that by increasing latent space dimension, the generator network is able to reconstruct translations with a similar precision as the original images. The second experiment is on reconstructing the CIFAR-10 dataset using the CelebA trained model. To provide the same resolution, the CIFAR-10 images were upscaled to 64 × 64 using bilinear interpolation. If the latent space is large enough, we can reconstruct the images well, as shown in Fig. 7. The reconstruction of the CelebA images is always worse as measured by the MSE (Fig. 7). The final experiment is on reconstructing noise images. For this a dataset of uniform noise images with size 64 × 64 is used. The are shown in Fig. 8. As is suggested by the , it is in fact not possible to reconstruct everything. One thing to note though is that with lower latent spaces some face structure is visible in the reconstructed images, showing that those models have a strong prior on the face structure. We noticed and the corresponding experiment is shown in the supplementary material, that the norm of the latent vector is smaller than of random vectors. Our hypothesis is that those images are the base face features, which are refined with larger latent vectors, but this guess needs further validation work. In this work, we show that by reducing the problem to a compression task, we can give a lower bound on the required capacity and latent space dimensionality of the generator network for the distribution estimation task. Relating these two different algorithms to each other, the literature surrounding AEs for invertability and dimensionality reduction, as well as the corresponding theoretical advancements are used. While in this work the encoder and the discriminator NNs use the same architecture, we have not discovered any relation between them. Still, the same architecture works well empirically for both task. Using our framework we show various properties of generator networks. The perceptual image quality appears to be independent of the actual size of the latent space, which is in contrast to AE, where the visual quality improves if the dimensionality of the latent space is increased. However, the ability to reconstruct the training set correctly does depend on the initial latent space. Also the ability of the generator to reconstruct deviations from the original dataset, like a validation set or shifted images depends just as much on the initial latent space. However, the same cannot be said for reconstructing arbitrary noise images. Here the reconstruction ability is independent of the initial latent space unless it is chosen very large, suggesting that the generator has learned realistic natural image features. Here for smaller latent spaces we can still observe face like features. 
Our hypothesis is that the implicit bias induced by the generator architecture lends itself to generating natural images and GANs are skilled at that by learning primitives which can be combined to construct arbitrary images. In future works we want to use our setup to search towards better and more reliable generators for images. For our experiments a standard CNN was used as is described in Table 1. All the convolutions use 5 × 5 kernels. For the CIFAR-10 experiments we removed one of the up and downlayers and started with 1024 feature maps. We used the ADAM optimizer with learning rate 1E − 4 and β 1 = 0. and β 2 = 0.9 similar to the original WGAN-GP as described by. We trained for 80.000 generator iterations and 5 discriminator iterations per generator update for all of our networks. We also trained the AE for 80.000 iterations with ADAM using the default hyperparameters. The inversion of the generator networks was done using 10.000 ADAM steps using default hyperparameters. We found that the changes afterwards were negligible. The initial experiments on MNIST validating the legitimacy of the approach are shown in Fig. 9. Instead of using a visual quality metric to determine if the ing images are similar enough, we used the reconstructed training images of the GAN for training a LeNet network to determine if the images are descriptive. The manifold of MNIST has been faithfully learned before by. In contrast to their work, we show that this only works given a large enough latent space and that the reconstruction error of the AE and the WGAN-GP converge for larger latent spaces. In this section the main question is if the found reconstructions likely to appear in the generative model. For this reason the inverse optimization procedure is run without the projection onto the feasible set for dim(z) = 1000 on CIFAR-10. The ing latent vector norm of the reconstructions is shown in Fig 10. The corresponding set of vectors, which is likely to appear in the training is shown in blue, while the training set reconstructions are shown in red and the test set reconstructions are shown in green. As is apparent therein, the overlap is virtually non-existant and this is the reason why we used a projected gradient method. We also investigated the difference between the ability of GAN to reconstruct images of CIFAR-10 while trained on CelebA compared to AE. The are shown in Fig. 11 and as is already shown in Fig. 7 the GAN algorithm is better at reconstructing CIFAR-10 images than CelebA images. However, this is not the case for the AE. And as a final experiment we show visual on CIFAR-10 images produced by an inverted GAN and by an AE in Fig. 12. As in the main paper we can the same behavior again. The AE produces initially blurry and then sharp ones and the GAN produces sharp of unrelated images and then produces images which are closer to the actual images. The first column are the actual images. As we increase the latent space of the autoencoder we can see the image quality increasing. The same cannot be said about the GAN reconstructions. Those are of high quality even for low dimensional space, but correspond to other people. | We analyze the impact of the latent space of fully trained generators by pseudo inverting them. | 1,244 | scitldr |
We address the problem of open-set authorship verification, a classification task that consists of attributing texts of unknown authorship to a given author when the unknown documents in the test set are excluded from the training set. We present an end-to-end model-building process that is universally applicable to a wide variety of corpora with little to no modification or fine-tuning. It relies on transfer learning of a deep language model and uses a generative adversarial network and a number of text augmentation techniques to improve the model's generalization ability. The language model encodes documents of known and unknown authorship into a domain-invariant space, aligning document pairs as input to the classifier, while keeping them separate. The ing embeddings are used to train to an ensemble of recurrent and quasi-recurrent neural networks. The entire pipeline is bidirectional; forward and backward pass are averaged. We perform experiments on four traditional authorship verification datasets, a collection of machine learning papers mined from the web, and a large Amazon-Reviews dataset. Experimental surpass baseline and current state-of-the-art techniques, validating the proposed approach. We investigate the applicability of transfer learning techniques to Authorship Verification (AV) problems, and propose a a method that uses some of the most recent advances in deep learning to achieve state of the art on a variety of datasets. AV seeks to determine whether two or more text documents have been written by the same author. Some applications of AV include plagiarism analysis, sock-puppet detection, blackmailing, and email spoofing prevention BID7. Traditionally, studies on AV consider a closed and limited set of authors, and a closed set of documents written by such authors. During the training step, some of these documents (sometimes as long as a novel) are used. The goal can be formulated as to successfully identify whether the authors of a pair of documents are identical BID14 BID19 BID11. This type of AV tasks assumes access to the writing samples of all possible authors during the training step, which is not realistic. Recently, the AV problem has changed to reflect realistic -and more challenging-scenarios. The goal is no longer to individually learn the writing style of the authors (like in traditional AV methods), but to learn what differentiates two different authors within a corpus. This task involves predicting authorship of documents that may not have been previously encountered within the training set; in fact, the presence of the authors in the training data is not guaranteed either. That is, the test set may contain out of training sample data; given a set of authors of unknown papers contained within the training data, A unknown train, and a set of authors of unknown papers in the test data, A unknown test, it is neither unreasonable nor unexpected to find that A unknown train ∩A unknown test = ∅. Some other challenges arise in modern AV tasks, making authorship verification of a given pair of documents hard to infer. One is the lack of training data, which can manifest itself in any one or more of the following: the training set may be small, samples of available writings may be limited, or the length of the given documents may be insufficient. 
Another is the test and train documents belonging to different genre and/or topics, both within their respective sets as well as between the train and the test set -implying they were likely drawn from different distributions. The challenge is to ensure robustness in a multitude of possible scenarios. Regardless of the AV problem specifics, generally we assume a training dataset made of sets of triples: DISPLAYFORM0 with x i X known, x j X unknown a realization from random variables X known and X unknown, and the label y i,j Y is drawn from a random variable Y, producing a total of P sets of realizations, each potentially by a different author, thus forming up to P source domains, because it can be argued that a collection of literary works by one author forms a latent domain of it's own. The goal is to learn a prediction function f: X → Y that can generalize well and make accurate predictions regarding documents written by authors both inside and outside of the training set, even if those documents were not seen in training. Less formally, in AV the task is composed of multiple sub-problems: for each given sub-set of texts, we are provided one or more documents that need to be verified and one or more that are known to be of identical authorship. We approach the AV problem by designing a straightforward deep document classification model that relies on transfer learning a deep language model, ensembles, an adversary, differential learning rates, and data augmentation. In order to ensure the design's versatility and robustness, we perform authorship verification on a collection of datasets that have little in common in terms of size, distribution, origins, and manner they were designed. For evaluation, we consider standard AV corpora with minimal amount of training data, PAN-2013 BID12, PAN-2014E and PAN-2014N BID27, PAN-2015 BID28, a collection of scientific papers mined from the web BID2, and Amazon Reviews dataset BID8. The proposed approach performs well in all scenarios with no specific modifications and minimal fine-tuning, defeating all baselines, PAN competition winners, as well as the recent Transformation Encoder and PRNN models that were recently shown to perform well on AV tasks. BID8. Our method consists of three major components: augmentation, transfer learning, and the training/testing process itself. At a high level, we augment the data, fine-tune a deep LSTM-based language model (LM) known as ULMFit BID9 on the augmented training set, train an ensemble of RNN and QRNN classifiers with the encoding produced by the LM forward and backward, and evaluate the test data while performing test-time data augmentation. We utilize various data augmentation techniques in order to improve model generalization TAB0. They broadly fall into two categories, document manipulation and adversarial noise injection with the LM. In addition, most of these techniques can be applied to the test set documents during evaluation; however, some do more harm than good when used in such manner. Noise injection is performed by a 5-layer LSTM model that was pre-trained on Wikipedia and fine-tuned on our data. In our setup, it acts as a generator with a 3-layer RNN classifier working as a critic. Adversarial loss function is a weighted average of the two losses DISPLAYFORM0 where g is the LM, f is an RNN and h is the linear classifier trained on RNN's average then max pooled and flattened 2 top layers. We use a weighted average because the nature of loss functions is very different. 
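Since the exact form of the two loss terms is not reproduced above, the sketch below only illustrates the weighted-average structure: a language-model term for the generator g and a real/fake term from the linear head h on top of the pooled critic f. The mixing weight and the use of binary cross-entropy for the critic term are our assumptions.

```python
import torch

def adversarial_loss(lm_loss, critic_logits, real_label, weight=0.5):
    """Weighted average of the language-model loss and the critic's loss.

    lm_loss       : scalar loss of the LM g on the injected sentences
    critic_logits : output of the linear head h on top of the pooled RNN f
    real_label    : 1 if the critic should believe the text is genuine, else 0
    weight        : mixing coefficient between the two terms (an assumption;
                    the text only states that a weighted average is used)
    """
    target = torch.full_like(critic_logits, float(real_label))
    critic_loss = torch.nn.functional.binary_cross_entropy_with_logits(
        critic_logits, target)
    return weight * lm_loss + (1.0 - weight) * critic_loss
```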
To improve quality of augmentation, we devised the following approach (Algorithm 1). Given a training set consisting of a number of problems, with each problem containing one or more documents known to be written by the same author and a single document of unknown authorship, we cycle through each problem in the training set. If for a given problem the ground truth answer is positive, we train on all documents and try injecting noise. If the critic can tell the fake, it means our new document is most likely too different from actual ones by this author to be of any use; we then try training some more, and inject shorter sentences and less of them. The process continues until critic is fooled or generator diverges -an unlikely event because critic is not hard to fool. We hypothesize that documents form latent domains of their own based on various linguistic characteristics, making it beneficial to transform the pairs of documents into a domain-invariant space. Documents forming latent domains means that authorship verification is a separate but similar task for each domain. We cannot exploit the similarity between tasks directly because the data distributions are different, and not accounting for that while building a model would violate basic principles of machine learning BID21. Domain Adaptation (DA), a subset of Transfer learning, addresses such Algorithm 1: Noise injection algorithm using language model with an adversary problems by establishing knowledge transfer from a labeled source domain to an unlabeled (or partially labeled) target domain, by exploring domain-invariant features or invariants which transfer across domains BID22 BID21 BID5 BID29, or by embedding the data into domain-invariant subspace. Another issue that we must address comes from the nature of the data. As the documents come in pairs, they are not readily suitable for standard classifiers. A naive approach of concatenation produces poor , and various distance function schema suitable for most linear models are not very suitable for RNNs. To address these problems we utilize a deep language model that produces an encoder capable of producing an embedding representing a pair of documents. It also alleviates the need for data by being pre-trained on a large set of Wikipedia articles BID9. The domain discrepancy issue is in part mitigated too, because the ing embedding subspace features are more invariant. In a gist, our model (FIG0) is a bi-directional pipeline of recurrent neural networks. It is built on top of a pre-trained 5-layer LSTM model and takes it's last 3 (2 intermediate hidden ones and the final embedding output) layers as inputs by pooling them together. We use an ensemble of sequence classifiers, one based on an RNN and the other using a QRNN BID1, a recent addition to the RNN family that combines some properties of recurrent and convolutional networks. Both are 3-layer models with the last 2 layers average then max pooled and passed through a ReLU non-linearity and then to logit units. We output probabilities rather than labels. The predictions made by RNN and QRNN are averaged. In taking advantage of improved generalization through making the model bi-directional, we faced two challenges. First, the pre-trained LSTM model we used is uni-directional. Second, QRNN design used in this paper does not support bi-directional training, either. 
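Before describing how the uni-directional components are handled, it may help to read the augmentation loop of Algorithm 1 back as code. The sketch below is only an outline of that loop: the method names (fine_tune, generate, detects, diverged) are placeholders for the corresponding steps, not an actual API, and the initial injection sizes are illustrative.

```python
def inject_noise(problems, lm, critic, max_rounds=10):
    """Outline of Algorithm 1: augment same-author problems with LM-generated
    text that the critic cannot distinguish from the author's real sentences."""
    augmented = []
    for problem in problems:
        if not problem.label:                  # only positive (same-author) problems
            continue
        num_sentences, sentence_len = 5, 40    # initial injection size (illustrative)
        for _ in range(max_rounds):
            lm.fine_tune(problem.documents)    # train some more on this author
            fake = lm.generate(num_sentences, sentence_len)
            if critic.detects(fake, problem.documents):
                # too different from the author: inject shorter sentences, fewer of them
                num_sentences = max(1, num_sentences - 1)
                sentence_len = max(10, sentence_len - 5)
            else:
                augmented.append((problem, fake))
                break                          # critic fooled, keep the sample
            if lm.diverged():
                break                          # give up on this problem
    return augmented
```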
We circumvented the issue by tokenizing and numericalizing the text data and first training in regular fashion on a normal pre-trained Wikipedia model, then loading the numericalized tokens backwards and using a model that was trained on Wikipedia backwards, as well. At test time, we reversed each document and gave the normal ones to the forward model and backward ones the backward version, then averaged the of two runs, effectively reaping the benefits of using a bi-directional RNN without actually doing so. We call our design 2WD-UAV in reference to ensembling of two versions of RNN for authorship verification and because of it's ULMFit heritage. The architecture is implemented in PyTorch with elements of fast.ai library BID10. PAN We use all available authorship identification datasets released by PAN 1 (TAB2 . Each PAN dataset consists of a training and test corpus and each corpus has a various number of distinct problems. Each problem is composed of one to five writings by a single person (implicitly disjoint For PAN2014 and PAN2015 and explicitly disjoint for PAN2013), and one piece of writing of unknown authorship. In the other words, we are given up to five pairs of documents where one document's authorship is known and the other one's is not. Two documents of a pair might be from significantly different genres Table 3. Similarity functions. x, y: document feature vectors, n: # of features in x and y Chi2 kernel exp(−γ i [ DISPLAYFORM0]) Cosine similarity xy T /(||x||||y||) DISPLAYFORM1 and topics. The length of a document changes from a few hundred to a few thousand words. PAN2014 includes two datasets: Essays and Novels. The paired documents in PAN datasets are used for our experiments. For a problem P = (S, T), S (source) is the first document and T (target) is the second document of a PAN problem BID8. Amazon Reviews We use a dataset made by selecting 300 authors with at least 40 reviews to make the positive and negative candidate sets. Then, for each author, the positive candidate set is all possible and unique combinations of the author's reviews. A positive class consists of 4500 review pairs from this positive candidate set at random. The negative candidate set is made of all unique and possible combinations of review pairs having different authors. For this dataset, the negative class of equal size with the positive class was created by random selection from the negative candidate set. In prior work, 5-fold cross validation was used for this data. We do the same in order for our to be comparable. BID8. MLPA* This schema was created using MPLA-400 dataset that contains 20 articles by each of the top-20 authors by citation in Machine Learning BID2. In MLPA*, only publications from MPLA-400 that are written by a single author and have no co-authors are used BID8. To keep the distribution of authors and classes balanced, MPLA* contains an equal number of single-authorship articles from all existing 20 authors. The positive class consists of the pairs which are made up of all possible combinations of same-authorship articles (20 × 9 2 = 720). The negative class includes the pairs that are randomly selected from the set of all unique combinations of articles of different authorship and is of the same size as the positive class. Like Amazon Reviews, MLPA* dataset authors recommend using 5-fold cross validation BID8. We compare our method with the top methods of PAN AV competition between 2013 and 2015 (TAB2). 
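The forward/backward trick described above amounts to a few lines at prediction time. The sketch assumes both models output class probabilities over the same label set; only the token order is reversed for the backward-trained model.

```python
import torch

def bidirectional_predict(forward_model, backward_model, tokens):
    """Average the predictions of a forward-trained and a backward-trained model.

    tokens : (batch, seq_len) tensor of numericalized tokens.
    Feeding reversed token sequences to the backward model emulates a
    bidirectional classifier without either component being bidirectional.
    """
    p_fwd = forward_model(tokens)
    p_bwd = backward_model(torch.flip(tokens, dims=[1]))
    return (p_fwd + p_bwd) / 2.0
```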
The of each method for one year of the competition are available and we report them here. Our comparisons are not impacted by different parameter settings and implementation details of these methods as long as we keep the test and training sets the same as theirs. We choose several classifiers widely used in the area with the seven similarity measures to set strong baselines (Table 3). Since each example in our underlying dataset structure comprises two documents, we need to adapt it to the structure of an ordinary classifier input by converting them to one single entity. A simple direct way is to concatenate their feature vectors. However, our experiments show it provides weak mostly equal to the random label assignment. So, we define the summary vector as a single unit representative of each example/problem P = (D S, D T) by utilizing several similarity measures. The summary vector comprises a class of several metrics each measuring one aspect of the closeness of the two documents (D S and D T) of the pair for all underlying feature sets. For any two feature vector documents x, y their summary vector is sum(x, y) = [sim j i (x, y)] where sim j i (x, y) 1≤i≤M,1≤j≤F computes the ith similarity metric of M metrics in Table 3 under jth of F = 7 feature sets (Section 3.2) between x, y. Then, we use a suite of classifiers including SVM, Gaussian Naive Bayes (GNB), K-Nearest Neighbor (KNN), Logistic Regression (LR), Decision Tree (DT) and MultiLayer Perception (MLP) to predict the class label. All baselines are implemented by the scikit-learn library BID23. 2WD-UAV For our model, a number of important parameters are set. Most importantly, to achieve our , we make use of recent work on alternating learning rates, as well as one-cycle learning policy BID25 BID26. The basic approach to training is as follows:-Contract learning rate lr for one cycle -Freeze it and save -Give the learning rate on next layer a very large value -Freeze it and save unfreeze the previous one -Assign a very small value to the next layer -Continue cycling until gradients explode -Return the last saved checkpoint -this is the global minimumWe also use a range of momentum across layers, as well as different learning rates for each. For the optimizer we choose AdamW BID18, an improved version of Adam BID13 with better weight decay regularization. We begin with weight decay of 0.03 and regularize by adjusting it as training progresses. 2. Gaussian distribution is chosen for Naive Bayes. For K-Nearest Neighbor we set K=3. The L-2 regularization is used for Logistic Regression. For document expansion, we set the size of the sliding window to l = 10. On average it expands one document into 30 smaller documents for PAN datasets. All other parameters are selected based on pilot experiments. We report accuracy, the Area Under Receiver Operating Characteristic (ROC) curve BID3 (AUC). The higher AUC and Score indicate more effective classification. We compare our proposed model 2WD-UAV with several relevant baselines. Table 4 evaluates our model with PAN datasets for different years and also the best performing model in the relevant competition years for PAN. Results show that 2WD-UAV consistently outperforms all baselines and all best-reported models in PAN competitions for all years in the Score metric. The Score metric is essentially Accuracy × ROC thereby measuring joint performance gains as both ROC and accuracy are important. 2WD-UAV outperforms in Accuracy for all competitors in PAN14Essay and PAN13 dataset. 
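A possible implementation of the summary vector is sketched below using scikit-learn's pairwise metrics. Only the chi-squared kernel and cosine similarity are taken from Table 3; the remaining negated-distance entries stand in for the measures that are not fully listed there, so the exact set of M metrics is an assumption on our side.

```python
import numpy as np
from sklearn.metrics.pairwise import (chi2_kernel, cosine_similarity,
                                      euclidean_distances, manhattan_distances)

SIMILARITIES = [
    lambda x, y: chi2_kernel(x, y)[0, 0],        # assumes non-negative features
    lambda x, y: cosine_similarity(x, y)[0, 0],
    lambda x, y: -euclidean_distances(x, y)[0, 0],   # stand-in for unlisted metrics
    lambda x, y: -manhattan_distances(x, y)[0, 0],   # stand-in for unlisted metrics
]

def summary_vector(feature_sets_x, feature_sets_y):
    """Build the summary vector for a document pair (D_S, D_T).

    feature_sets_x / feature_sets_y : lists of F feature vectors, one per
    feature representation of the two documents. The result has one entry per
    (similarity measure, feature set) combination, i.e. M * F entries.
    """
    vec = []
    for x, y in zip(feature_sets_x, feature_sets_y):
        x, y = np.atleast_2d(x), np.atleast_2d(y)
        vec.extend(sim(x, y) for sim in SIMILARITIES)
    return np.array(vec)
```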
It is the second best in PAN15 just offset by one decimal point. While it is not performing the best in accuracy for PAN14Novels, it yields competitive performances of accuracy and outperforms all others in ROC metric. 2WD-UAV also outperforms all other models in the ROC metric for PAN15. For PAN14E and PAN 13, it outperforms several baselines and offers stellar performance in ROC metric, just to be second to MLP and CNG respectively. While it is true that the proposed approach is not always the best performing on PAN data in every metric except Score, we believe one reason is due to the inherently smaller data sizes (both total words of data per author to train upon and also the total number of authors to scale up training) that make the approach a little weak. Hence, we further explored larger datasets of Amazon Reviews BID8 and MLPA* BID8 BID2 in Table 5 which shows significant performance gains in accuracy, defeating a variety of baselines. All in all, we do find stable and consistent performance gains with Table 5. Accuracy using 5-fold cross-validation on MLPA* and Amazon Reviews. Domain Adaptation Documents forming latent domains means that authorship verification is a separate but similar task for each domain. We cannot exploit the similarity between tasks directly because the data distributions are different, and not accounting for that while building a model would violate basic principles of machine learning BID21. Domain Adaptation (DA) addresses such problems by establishing knowledge transfer from a labeled source domain to an unlabeled (or partially labeled) target domain, and by exploring domain-invariant features or invariants which transfer across domains BID22 BID21 BID5 BID29. Authorship Verification In vast majority of the AV approaches, the writing style of a questioned author is known to us as we are given some scripts of the author and the task is to determine whether a piece of work is written by the same person. The depth of difference between two sets of documents is measured using the unmasking technique while ignoring the negative examples BID14. This one-class technique achieves high accuracy for 21 considerably large books (ebook above 500K). A simple feed forward three-layer neural network, an auto-encoder, is used for AV considering it a one-class classification problem BID19. They observe the behavior of the neural network for documents by different authors and build a classifier for each author. Their idea originates from one of the first applications of auto-encoder in classification as a novelty detector BID11. AV is also studied for detecting sock-puppets who deliberately change their writing styles to pass the filters and provide opinion Spam. A spy induction method is proposed to leverage the test data in training step under "out-of-training" setting BID7 where a questioned author is from a closed set of candidates while appearing unknown to the verifier. However, in a more realistic case we have no specified writing samples of a questioned author and there is no closed candidate set of authors. Since 2013, a surge of interest arose for this type of AV problem. BID24 investigate whether a document is one of the outliers in a corpus by generalizing the Many-Candidate method by BID15. The best method of PAN 2014 for Essays dataset optimizes a decision tree. Its method is enriched by adopting variety of features and similarity measures BID4. 
However, for the Novels dataset, the other dataset of that year, the best are achieved by an author verifier using fuzzy C-Means clustering BID20. In an alternative approach, BID16 generate a set of impostor documents and apply iterative feature randomization to compute the similarity distance between pairs of documents. One of the more interesting and powerful approaches investigates the language model of all authors using a shared recurrent layer and builds a classifier for each author BID0. Parallel recurrent neural network and transformation auto-encoder approaches were recently shown to produce excellent far a variety of AV problems BID8. The AV problem is also studied by a non Machine Learning model comprising of a compression algorithm, a dissimilarity method and a threshold. When evaluated on PAN datasets, this approach stands at first ranking position for the two out of four PAN datasets BID6. Recently, Linguistic traits of sock-puppets are deeply studied to verify the authorship of a pair of accounts in online discussion communities BID17. Authorship verification has always been a challenging problem. It can be even more difficult when no writing samples of questioned author/authors is given. In this paper, we explore the possibility of a more general approach to the problem, one that does not rely on having most of the authors within the training set. To this end, we use transfer and adversarial learning learning, data augmentation, ensemble methods, and cutting edge developments in training deep models to produce an architecture that is to the best of our knowledge novel at least to problem setting. Our design exhibits a high degree of robustness and stability when dealing with out-of-sample (previously unseen) authors and lack of training data and delivers state-of-the-art performance. | We propose and end-to-end model-building process that is universally applicable to a wide variety of authorship verification corpora and outperforms state-of-the-art with little to no modification or fine-tuning. | 1,245 | scitldr |
We consider tackling a single-agent RL problem by distributing it to $n$ learners. These learners, called advisors, endeavour to solve the problem from a different focus. Their advice, taking the form of action values, is then communicated to an aggregator, which is in control of the system. We show that the local planning method for the advisors is critical and that none of the ones found in the literature is flawless: the \textit{egocentric} planning overestimates values of states where the other advisors disagree, and the \textit{agnostic} planning is inefficient around danger zones. We introduce a novel approach called \textit{empathic} and discuss its theoretical aspects. We empirically examine and validate our theoretical findings on a fruit collection task. When a person faces a complex and important problem, his individual problem solving abilities might not suffice. He has to actively seek for advice around him: he might consult his relatives, browse different sources on the internet, and/or hire one or several people that are specialised in some aspects of the problem. He then aggregates the technical, ethical and emotional advice in order to build an informed plan and to hopefully make the best possible decision. A large number of papers tackle the decomposition of a single Reinforcement Learning task into several simpler ones. They generally follow a method where agents are trained independently and generally greedily to their local optimality, and are aggregated into a global policy by voting or averaging. Recent works BID12 BID30 prove their ability to solve problems that are intractable otherwise. Section 2 provides a survey of approaches and algorithms in this field. Formalised in Section 3, the Multi-Advisor RL (MAd-RL) partitions a single-agent RL into a MultiAgent RL problem BID22, under the widespread divide & conquer paradigm. Unlike Hierarchical RL BID2 BID19 BID4, this approach gives them the role of advisor: providing an aggregator with the local Q-values for all actions. The advisors are said to have a focus: reward function, state space, learning technique, etc. The MAd-RL approach allows therefore to tackle the RL task from different focuses. When a person is consulted for an advice by a enquirer, he may answer egocentrically: as if he was in charge of next actions, agnostically: anticipating any future actions equally, or empathically: by considering the next actions of the enquirer. The same approaches are modelled in the local advisors' planning methods. Section 4 shows that the egocentric planning presents the severe theoretical shortcoming of inverting a max into a max in the global Bellman equation. It leads to an overestimation of the values of states where the advisors disagree, and creates an attractor phenomenon, causing the system to remain static without any tie-breaking possibilities. It is shown on a navigation task that attractors can be avoided by lowering the discount factor γ under a given value. The agnostic planning BID30 has the drawback to be inefficient in dangerous environments, because it gets easily afraid of the controller performing a bad sequence of actions. Finally, we introduce our novel empathic planning and show that it converges to the global optimal Bellman equation when all advisors are training on the full state space.van BID29 demonstrate on a fruit collection task that a distributed architecture significantly speeds up learning and converges to a better solution than non distributed baselines. 
Section 5.2 extends those and empirically validates our theoretical analysis: the egocentric planning gets stuck in attractors with high γ values; with low γ values, it gets high scores but is also very unstable as soon as some noise is introduced; the agnostic planning fails at efficiently gathering the fruits near the ghosts; despite lack of convergence guarantees with partial information in advisors' state space, our novel empathic planning also achieves high scores while being robust to noise. Task decomposition -Literature features numerous ways to distribute a single-agent RL problem over several specialised advisors: state space approximation/reduction BID0, reward segmentation BID2 BID8 BID31 BID30, algorithm diversification BID32 BID14, algorithm randomization BID1 BID10, sequencing of actions BID27, or factorisation of actions BID15. In this paper, we mainly focus on reward segmentation and state space reduction but the findings are applicable to any family of advisors. BID23 are the first to propose to merge Markov decision Processes through their value functions. It makes the following strong assumptions: positive rewards, model-based RL, and local optimality is supposed to be known. Finally, the algorithm simply accompanies a classical RL algorithm by pruning actions that are known to be suboptimal. BID24 propose to use a local SARSA online learning algorithm for training the advisors, but they elude the fact that the online policy cannot be locally accurately estimated with partial state space, and that this endangers the convergence properties. BID21 study more in depth the theoretical guaranties of convergence to optimality with the local Q-learning, and the local SARSA algorithms. However, their work is limited in the fact that they do not allow the local advisors to be trained on local state space. van BID30 relax this assumption at the expense of optimality guarantees and beat one of the hardest Atari games: Ms. Pac-Man, by decomposing the task into hundreds of subtasks that are trained in parallel. MAd-RL can also be interpreted as a generalisation of Ensemble Learning BID5 for RL. As such, BID25 use a boosting algorithm in a RL framework, but the boosting is performed upon policies, not RL algorithms. In this sense, this article can be seen as a precursor to the policy reuse algorithm BID7 rather than a multi-advisor framework. BID32 combine five online RL algorithms on several simple RL problems and show that some mixture models of the five experts performs generally better than any single one alone. Each algorithm tackles the whole task. Their algorithms were off-policy, on-policy, actor-critics, etc. BID6 continue this effort in a very specific setting where actions are explicit and deterministic transitions. We show in Section 4 that the planning method choice is critical and that some recommendations can be made in accordance to the task definition. In BID11, while all advisors are trained on different reward functions, these are potential based reward shaping variants of the same reward function. They are therefore embedding the same goals. As a consequence, it can be related to a bagging procedure. The advisors recommendation are then aggregated under the HORDE architecture BID28, with egocentric planning. Two aggregator functions are tried out: majority voting and ranked voting. BID14 follow a different approach in which, instead of boosting the weak advisors performances by aggregating their recommendation, they select the best advisor. 
This approach is beneficial for staggered learning, or when one or several advisors may not find good policies, but not for variance reduction brought by the committee, and it does not apply to compositional RL.The UNREAL architecture BID12 improves the state-of-the art on Atari and Labyrinth domains by training their deep network on auxiliary tasks. They do it in an unsupervised manner and do not consider each learner as a direct contributor to the main task. The bootstrapped DQN architecture BID17 also exploits the idea of multiplying the Q-value estimations to favour deep exploration. As a , UNREAL and bootstrapped DQN do not allow to break down a task into smaller, tractable pieces. As a summary, a large variety of papers are published on these subjects, differing by the way they factorise the task into subtasks. Theoretical obstacles are identified in BID23 and BID21, but their analysis does not go further than the non-optimality observation in the general case. In this article, we accept the non-optimality of the approach, because it naturally comes from the simplification of the task brought by the decomposition and we analyse the pros and cons of each planning methods encountered in the literature. But first, Section 3 lays the theoretical foundation for Multi-Advisor RL.Domains -Related works apply their distributed models to diverse domains: racing BID21, scheduling BID21, dialogue BID14, and fruit collection BID23 BID24 BID29 b). Pac-Boy is yellow, the corridors are in black, the walls in grey, the fruits are the white dots and the ghosts are in red. The fruit collection task being at the centre of attention, it is natural that we empirically validate our theoretical findings on this domain: the Pac-Boy game (see FIG0, borrowed from van BID29 . Pac-Boy navigates in a 11x11 maze with a total of 76 possible positions and 4 possible actions in each state: A = {N, W, S, E}, respectively for North, West, South and East. Bumping into a wall simply causes the player not to move without penalty. Since Pac-Boy always starts in the same position, there are 75 potential fruit positions. The fruit distribution is randomised: at the start of each new episode, there is a 50% probability for each position to have a fruit. A game lasts until the last fruit has been eaten, or after the 300 th time step. During an episode, fruits remain fixed until they get eaten by Pac-Boy. Two randomly-moving ghosts are preventing Pac-Boy from eating all the fruits. The state of the game consists of the positions of PacBoy, fruits, and ghosts: 76 × 2 75 × 76 2 ≈ 10 28 states. Hence, no global representation system can be implemented without using function approximation. Pac-Boy gets a +1 reward for every eaten fruit and a −10 penalty when it is touched by a ghost. Markov Decision Process -The Reinforcement Learning (RL) framework is formalised as a Markov Decision Process (MDP). An MDP is a tuple X, A, P, R, γ where X is the state space, A is the action space, P: X × A → X is the Markovian transition stochastic function, R: X × A → R is the immediate reward stochastic function, and γ is the discount factor. A trajectory x(t), a(t), x(t + 1), r(t) t∈ 0,T −1 is the projection into the MDP of the task episode. The goal is to generate trajectories with high discounted cumulative reward, also called more succinctly return: DISPLAYFORM0 To do so, one needs to find a policy π: DISPLAYFORM1 Advisor Advisor... 
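The return referred to here is the usual discounted sum of rewards, R = sum_t gamma^t r(t), and the policy pi : X -> A is sought so as to maximize its expectation. A two-line illustration on a Pac-Boy-style trajectory:

```python
def discounted_return(rewards, gamma):
    """Discounted cumulative reward of a trajectory: sum_t gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# e.g. eating two fruits (+1 each) and then being caught by a ghost (-10)
print(discounted_return([1, 1, -10], gamma=0.9))   # 1 + 0.9 - 8.1, roughly -6.2
```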
Figure 2: The MAd-RL architectureMAd-RL structure -This section defines the Multi-Advisor RL (MAd-RL) framework for solving a single-agent RL problem. The n advisors are regarded as specialised, possibly weak, learners that are concerned with a sub part of the problem. Then, an aggregator is responsible for merging the advisors' recommendations into a global policy. The overall architecture is illustrated in Figure 2. At each time step, an advisor j sends to the aggregator its local Q-values for all actions in the current state. Aggregating advisors' recommendations -In Figure 2, the f function's role is to aggregate the advisors' recommendations into a policy. More formally, the aggregator is defined with f: R n×|A| → A, which maps the received q j = Q j (x j, •) values into an action of A. This article focuses on the analysis of the way the local Q j -functions are computed. From the values q j, one can design a f function that implements any aggregator function encountered in the Ensemble methods literature BID5: voting schemes BID9, Boltzmann policy mixtures BID32 and of course linear value-function combinations BID25 BID21 BID30. For the further analysis, we restrict ourselves to the linear decomposition of the rewards: R(x, a) = j w j R j (x j, a), which implies the same decomposition of return if they share the same γ. The advisor's state representation may be locally defined by φ j: X → X j, and its local state is denoted by x j = φ j (x) ∈ X j. We define the aggregator function f Σ (x) as being greedy over the Q j -functions aggregation Q Σ (x, a): DISPLAYFORM0 We recall hereunder the main theoretical of BID30: a theorem ensuring, under conditions, that the advisors' training eventually converges. Note that by assigning a stationary behaviour to each of the advisors, the sequence of random variables X 0, X 1, X 2,..., with X t ∈ X is a Markov chain. For later analysis, we assume the following. Assumption 1. All the advisors' environments are Markov: DISPLAYFORM1 Theorem 1 (van BID30). Under Assumption 1 and given any fixed aggregator, global convergence occurs if all advisors use off-policy algorithms that converge in the single-agent setting. Although Theorem 1 guarantees convergence, it does not guarantee the optimality of the converged solution. Moreover, this fixed point only depends on each advisor model and on their planning methods (see Section 4), but not on the particular optimisation algorithms that are used by them. This section present three planning methods at the advisor's level. They differ in the policy they evaluate: egocentric planning evaluates the local greedy policy, agnostic planning evaluates the random policy, and empathic planning evaluates the aggregator's greedy policy. The most common approach in the literature is to learn off-policy by bootstrapping on the locally greedy action: the advisor evaluates the local greedy policy. This planning, referred to in this paper as egocentric, has already been employed in BID23, BID21, BID11, and van BID29. Theorem 1 guarantees for each advisor j the convergence to the local optimal value function, denoted by Q ego j, which satisfies the Bellman optimality equation: DISPLAYFORM0 where the local immediate reward r j is sampled according to R j (x j, a), and the next local state x j is sampled according to P j (x j, a). 
In the aggregator global view, we get: By construction, r = j w j r j, and therefore we get: DISPLAYFORM1 DISPLAYFORM2 Egocentric planning suffers from an inversion between the max and operators and, as a consequence, it overestimate the state-action values when the advisors disagree on the optimal action. This flaw has critical consequences in practice: it creates attractor situations. Before we define and study them formally, let us explain attractors with an illustrative example based on the simple MDP depicted in FIG1. In initial state x 0, the system has three possible actions: stay put (action a 0), perform advisor 1's goal (action a 1), or perform advisor 2's goal (action a 2). Once achieving a goal, the trajectory ends. The Q-values for each action are easy to compute: DISPLAYFORM3 As a consequence, if γ > r 1 /(r 1 + r 2) and γ > r 2 /(r 1 + r 2), the local egocentric planning commands to execute action a 0 sine die. This may have some apparent similarity with the Buridan's ass paradox BID33 BID20: a donkey is equally thirsty and hungry and cannot decide to eat or to drink and dies of its inability to make a decision. The determinism of judgement is identified as the source of the problem in antic philosophy. Nevertheless, the egocentric sub-optimality does not come from actions that are equally good, nor from the determinism of the policy, since adding randomness to the system will not help. Let us define more generally the concept of attractors. Definition 1. An attractor x is a state where the following strict inequality holds: DISPLAYFORM4 Theorem 2. State x is attractor, if and only if the optimal egocentric policy is to stay in x if possible. Note that there is no condition in Theorem 2 (proved in appendix, Section A) on the existence of actions allowing the system to be actually static. Indeed, the system might be stuck in an attractor set, keep moving, but opt to never achieve its goals. To understand how this may happen, just replace state x 0 in FIG1 with an attractor set of similar states: where action a 0 performs a random transition in the attractor set, and actions a 1 and a 2 respectively achieve tasks of advisors 1 and 2. Also, it may happen that an attractor set is escapable by the lack of actions keeping the system in an attractor set. For instance, in FIG1, if action a 0 is not available, x 0 remains an attractor, but an unstable one. Definition 2. An advisor j is said to be progressive if the following condition is satisfied: DISPLAYFORM5 The intuition behind the progressive property is that no action is worse than losing one turn to do nothing. In other words, only progress can be made towards this task, and therefore non-progressing actions are regarded by this advisor as the worst possible ones. Theorem 3. If all the advisors are progressive, there cannot be any attractor. The condition stated in Theorem 3 (proved in appendix, Section A) is very restrictive. Still, there exist some RL problems where Theorem 3 can be applied, such as resource scheduling where each advisor is responsible for the progression of a given task. Note that a MAd-RL setting without any attractors does not guarantee optimality for the egocentric planning. Most of RL problems do not fall into this category. 
Theorem 3 neither applies to RL problems with states that terminate the trajectory while some goals are still incomplete, nor to navigation tasks: when the system goes into a direction that is opposite to some goal, it gets into a state that is worse than staying in the same position. Navigation problem attractors -We consider the three-fruit attractor illustrated in FIG2: moving towards a fruit, makes it closer to one of the fruits, but further from the two other fruits (diagonal moves are not allowed). The expression each action Q-value is as follows: DISPLAYFORM6 As a , the aggregator would opt to go South and hit the wall indefinitely. More generally in a deterministic 1 task where each action a in a state x can be cancelled by a new action a -1 x, it can be shown that the condition on γ is a function of the size of the action set A. Theorem 4 is proved in Section A of the appendix. Theorem 4. State x ∈ X is guaranteed not to be an attractor if ∀a ∈ A, ∃a -1x ∈ A, such that P (P (x, a), a The agnostic planning does not make any prior on future actions and therefore evaluates the random policy. Once again, Theorem 1 guarantees the convergence of the local optimisation process to its local optimal value, denoted by Q agn j, which satisfies the following Bellman equation: DISPLAYFORM0 Local agnostic planning is equivalent to the global agnostic planning. Additionally, as opposed to the egocentric case, r.h.s. of the above equation does not suffer from max-inversion. It then follows that no attractor are present in agnostic planning. Nevertheless, acting greedily with respect to Q agn Σ (x, a) only guarantees to be better than a random policy and in general may be far from being optimal. Still, agnostic planning has proved its usefulness on Ms. Pac-Man (van b). A novel approach, inspired from the online algorithm found in BID24; BID21 is to locally predict the aggregator's policy. In this method, referred to as empathic, the aggregator is in control, and the advisors are evaluating the current aggregator's greedy policy f with respect to their local focus. More formally, the local Bellman equilibrium equation is the following one: Q ap j (x j, a) = E r j + γQ ap j (x j, f Σ (x)). Theorem 5. Assuming that all advisors are defined on the full state space, MAd-RL with empathic planning converges to the global optimal policy. Proof. DISPLAYFORM0 Σ is the unique solution to the global Bellman optimality equation, and therefore equals the optimal value function, quod erat demonstrandum. However, most MAd-RL settings involve taking advantage of state space reduction to speed up learning, and in this case, there is no guarantee of convergence because function f Σ (x) can only be approximated from the local state space scope. As a the local estimatef j (x) is used instead of f Σ (x) and the reconstruction of max a ∈A Q ap Σ (x, a) is not longer possible in the global Bellman equation: DISPLAYFORM1 For this experiment, we intend to show that the value function is easier to learn with the MAd-RL architecture. We consider a fruit collection task where the agent has to navigate through a 5 × 5 grid and receives a +1 reward when visiting a fruit cell (5 are randomly placed at the beginning of each episode). A deep neural network (DNN) is fitted to the ground-truth value function V π * γ for various objective functions: TSP is the optimal number of turns to gather all the fruits, RL is the optimal return, and egocentric is the optimal MAd-RL return. 
This learning problem is fully supervised on 1000 samples, allowing us to show how fast a DNN can capture V π * γ while ignoring the burden of finding the optimal policy and estimating its value functions through TD-backups or value iteration. To evaluate the DNN's performance, actions are selected greedily by moving the agent up, down, left, or right to the neighbouring grid cell of highest value. Section B.1 of the appendix gives the details. FIG6 displays the performance of the theoretical optimal policy for each objective function in dashed lines. Here TSP and RL targets largely surpass MAd-RL one. But FIG6 also displays the performances of the networks trained on the limited data of 1000 samples, for which the are completely different. The TSP objective target is the hardest to train on. The RL objective target follows as the second hardest to train on. The egocentric planning MAd-RL objective is easier to train on, even without any state space reduction, or even without any reward/return decomposition (summed version). Additionally, if the target value is decomposed (vector version), the training is further accelerated. Finally, we found that the MAd-RL performance tends to dramatically decrease when γ gets close to 1, because of attractors' presence. We consider this small experiment to show that the complexity of objective function is critical and that decomposing it in the fashion of MAd-RL may make it simpler and therefore easier to train, even without any state space reduction. In this section, we empirically validate the findings of Section 4 in the Pac-Boy domain, presented in Section 2. The MAd-RL settings are associating one advisor per potential fruit location. The local state space consists in the agent position and the existence -or not-of the fruit. Six different settings are compared: the two baselines linear Q-learning and DQN-clipped, and four MAd-RL settings: egocentric with γ = 0.4, egocentric with γ = 0.9, agnostic with γ = 0.9, and empathic with γ = 0.9. The implementation and experimentation details are available in the appendix, at Section B.2.We provide links to 5 video files (click on the blue links) representing a trajectory generated at the 50 th epoch for various settings. egocentric-γ = 0.4 adopts a near optimal policy coming close to the ghosts without taking any risk. The fruit collection problem is similar to the travelling salesman problem, which is known to be NP-complete BID18. However, the suboptimal small-γ policy consisting of moving towards the closest fruits is in fact a near optimal one. Regarding the ghost avoidance, egocentric with small γ gets an advantage over other settings: the local optimisation guarantees a perfect control of the system near the ghosts. The most interesting outcome is the presence of the attractor phenomenon in egocentric-γ = 0.9: Pac-Boy goes straight to the centre area of the grid and does not move until a ghost comes too close, which it still knows to avoid perfectly. This is the empirical confirmation that the attractors, studied in Section 4.1, present a real practical issue. empathic is almost as good as egocentric-γ = 0.4. agnostic proves to be unable to reliably finish the last fruits because it is overwhelmed by the fear of the ghosts, even when they are still far away. This feature of the agnostic planning led van BID30 to use a dynamic normalisation depending on the number of fruits left on the board. Finally, we observe that DQN-clipped also struggles to eat the last fruits. 
The quantitative analysis displayed in FIG6 confirms our qualitative video-based impressions. egocentric-γ = 0.9 barely performs better than linear Q-learning, DQN-clipped is still far from the optimal performance, and gets hit by ghosts from time to time. agnostic is closer but only rarely eats all the fruits. Finally, egocentric-γ = 0.4 and empathic are near-optimal. Only egocentric-γ = 0.4 trains a bit faster, and tends to finish the game 20 turns faster too (not shown).Results with noisy rewards -Using a very small γ may distort the objective function and perhaps even more importantly the reward signal diminishes exponentially as a function of the distance to the number of turns network trained on vector egocentric objective network trained on summed egocentric objective network trained on RL objective network trained on TSP objective optimal policy given egocentric objective optimal policy given RL objective optimal policy given TSP objective (a) Value function training. goal, which might have critical consequences in noisy environments, hence the following experiment: several levels of Gaussian centred white noise η σ with standard deviation σ ∈ {0.01, 0.1} have been applied to the reward signal: at each turn, each advisor now receivesr j = r j + η σ instead. Since the noise is centred and white, the ground truth Q-functions remain the same, but their estimators obtained during sampling is corrupted by noise variance. Empirical displayed in FIG6 shows that the empathic planning performs better than the egocentric one even under noise with a 100 times larger variance. Indeed, because of the noise, the fruit advisors are only able to consistently perceive the fruits that are in a radius dependent on γ and σ. The egocentric planning, incompatible with high γ values, is therefore myopic and cannot perceive distant fruits. The same kind of limitations are expected to be encountered for small γ values when the local advisors rely on state approximations, and/or when the transitions are stochastic. This also supports the superiority of the empathic planning in the general case. This article presented MAd-RL, a common ground for the many recent and successful works decomposing a single-agent RL problem into simpler problems tackled by independent learners. It focuses more specifically on the local planning performed by the advisors. Three of them -two found in the literature and one novel -are discussed, analysed and empirically compared: egocentric, agnostic, and empathic. The lessons to be learnt from the article are the following ones. The egocentric planning has convergence guarantees but overestimates the values of states where the advisors disagree. As a consequence, it suffers from attractors: states where the no-op action is preferred to actions making progress on a subset of subtasks. Some domains, such as resource scheduling, are identified as attractor-free, and some other domains, such as navigation, are set conditions on γ to guarantee the absence of attractor. It is necessary to recall that an attractor-free setting means that the system will continue making progress towards goals as long as there are any opportunity to do so, not that the egocentric MAd-RL system will converge to the optimal solution. The agnostic planning also has convergence guarantees, and the local agnostic planning is equivalent to the global agnostic planning. However, it may converge to bad solutions. 
For instance, in dangerous environments, it considers all actions equally likely, it favours staying away from situation where a random sequence of actions has a significant chance of ending bad: crossing a bridge would be avoided. Still, the agnostic planning simplicity enables the use of general value functions BID28 BID30.The empathic planning optimises the system according to the global Bellman optimality equation, but without any guarantee of convergence, if the advisor state space is smaller than the global state. In our experiments, we never encountered a case where the convergence was not obtained, and on the Pac-Boy domain, it robustly learns a near optimal policy after only 10 epochs. It can also be safely applied to Ensemble RL tasks where all learners are given the full state space. Theorem 1 BID30 ). Under Assumption 1 and given any fixed aggregator, global convergence occurs if all advisors use off-policy algorithms that converge in the single-agent setting. Proof. Each advisor can be seen as an independent learner training from trajectories controlled by an arbitrary behavioural policy. If Assumption 1 holds, each advisor's environment is Markov and off-policy algorithms can be applied with convergence guarantees. Theorem 2. State x is attractor, if and only if the optimal egocentric policy is to stay in x if possible. Proof. The logical biconditional will be demonstrated by successively proving the two converse conditionals. First, the sufficient condition: let us assume that state x is an attractor. By Definition 1, if state x is an attractor, we have: DISPLAYFORM0 Let a 0 denote the potential action to stay in x, and consider the MDP augmented with a 0 in x. Then, we have: DISPLAYFORM1 which proves that a 0, if possible, will be preferred to any other action a ∈ A.Second, the reciprocal condition: let us assume that, in state x, action a 0 would be preferred by an optimal policy under the egocentric planning. Then: DISPLAYFORM2 which proves that x is an attractor. Theorem 3. If all the advisors are progressive, there cannot be any attractor. Proof. Let sum Definition 2 over advisors: DISPLAYFORM3 which proves the theorem. Theorem 4. State x ∈ X is guaranteed not to be an attractor if:• ∀a ∈ A, ∃a -1x ∈ A, such that P (P (x, a), a -1 DISPLAYFORM4 Under review as a conference paper at ICLR 2018Proof. Let us denote J x a as the set of advisors for which action a is optimal in state x. Q ego a (x) is defined as the sum of perceived value of performing a in state x by the advisors that would choose it: DISPLAYFORM5 Let a + be the action that maximises this Q ego a (x) function: DISPLAYFORM6 We now consider the left hand side of the inequality characterising the attractors in Definition 1: DISPLAYFORM7 Since R(x, a +) ≥ 0, and since the a maximising Q ego j (x j, a) is at least as good as the cancelling action (a +)-1x, we can follow with: DISPLAYFORM8 By comparing this last with the right hand side of Definition 1, the condition for x not being an attractor becomes: DISPLAYFORM9 It follows directly from the inequality Q Similarly to the , we incorporate the location of the fruits into the state representation by using a 50 dimensional bit vector, where the first 25 entries are used for fruit positions, and the last 25 entries are used for the agent's position. The DNN feeds this bit-vector as the input layer into two dense hidden layers with 100 and then 50 units. 
The output is a single linear head representing the state-value, or a multiple in the case of the vector MAd-RL target. In order to assess the value function complexity, we train for each discount factor setting a DNN of fixed size on 1000 random states with their ground truth values. Each DNN is trained over 500 epochs using the Adam optimizer with default parameters. Four different objective function targets are considered:• The TSP objective function target is the natural objective function, as defined by the Travelling Salesman Problem: the number of turns to gather all the fruits: DISPLAYFORM0 where k is the number of fruits remaining in statex, where Σ k is the ensemble of all permutations of integers between 1 and k, where σ is one of those permutations, where x 0 is the position of the agent in x, where x i for 1 ≤ i ≤ k is the position of fruit with index i, where d(x i, x j) is the distance (||·|| 1 in our gridworld) between positions x i and x j.• The RL objective function target is the objective function defined for an RL setting, which depends on the discount factor γ: DISPLAYFORM1 with the same notations as for TSP.• The summed egocentric planning MAd-RL objective function target does not involve the search into the set of permutations and can be considered simpler to this extent: DISPLAYFORM2 with the same notations as for TSP.• The vector egocentric planning MAd-RL objective function target is the same as the summed one, except that the target is now a vector, separated into as many channels as potential fruit position: x0,xi) if there is a fruit in x i, 0 otherwise. DISPLAYFORM3 MAd-RL Setup -Each advisor is responsible for a specific source of reward (or penalty). More precisely, we assign an advisor to each possible fruit location. This advisor sees a +1 reward only if a fruit at its assigned position gets eaten. Its state space consists of Pac-Boy's position, ing in 76 states. In addition, we assign an advisor to each ghost. This advisor receives a -10 reward if Pac-Boy bumps into its assigned ghost. Its state space consists of Pac-Boy's and ghost's positions, ing in 76 2 states. A fruit advisor is only active when there is a fruit at its assigned position. Because there are on average 37.5 fruits, the average number of advisors running at the beginning of each episode is 39.5. Each fruit advisor is set inactive when its fruit is eaten. The learning was performed through Temporal Difference updates. Due to the small state spaces for the advisors, we can use a tabular representation. We train all learners in parallel with off-policy learning, with Bellman residuals computed as presented in Section 4 and a constant α = 0.1 parameter. The aggregator function sums the Q-values for each action a ∈ A: Q Σ (x, a):= j Q j (x j, a), and uses -greedy action selection with respect to these summed values. Because ghost agents have exactly identical MDP, we also benefit from direct knowledge transfer by sharing their Q-tables. One can notice that Assumption 1 holds in this setting and that, as a consequence, Theorem 1 applies for the egocentric and agnostic planning methods. Theorem 4 determines sufficient conditions for not having any attractor in the MDP. In the Pac-Boy domain, the cancelling action condition is satisfied for every x ∈ X. As for the γ condition, it is not only sufficient but also necessary, since being surrounded by goals of equal value is an attractor if γ > 1/3. 
In practice, an attractor becomes stable only when there is an action enabling it to remain in the attraction set. Thus, the condition for not being stuck in an attractor set can be relaxed to γ ≤ 1/(|A|−2). Hence, the of γ > 1/2 in the example illustrated by FIG2.Baselines -The first baseline is the standard DQN algorithm BID16 with reward clipping (referred to as DQN-clipped). Its input is a 4-channel binary image with the following features: the walls, the ghosts, the fruits, or Pac-Boy. The second baseline is a system that uses the exact same input features as the MAd-RL model. Specifically, the state of each advisor of the MAd-RL model is encoded with a one-hot vector and all these vectors are concatenated, ing in a sparse binary feature vector of size 17, 252. This vector is used for linear function approximation with Q-learning. We refer to this setting with linear Q-learning. We also tried to train deep architectures from these features with no success. Experimental setting -Time scale is divided into 50 epochs lasting 20,000 transitions each. At the end of each epoch an evaluation phase is launched for 80 games. The theoretical expected maximum score is 37.5 and the random policy average score is around -80.Explicit links to the videos (www.streamable.com website was used to ensure anonymity, if accepted the videos will be linked to a more sustainable website):• egocentric-γ = 0.4: https://streamable.com/6tian• egocentric-γ = 0.9: https://streamable.com/sgjkq• empathic: https://streamable.com/h6gey• agnostic: https://streamable.com/grswh• DQN-clipped: https://streamable.com/emh6y | We consider tackling a single-agent RL problem by distributing it to $n$ learners. | 1,246 | scitldr |
We present a hybrid framework that leverages the trade-off between temporal and frequency precision in audio representations to improve the performance of speech enhancement task. We first show that conventional approaches using specific representations such as raw-audio and spectrograms are each effective at targeting different types of noise. By integrating both approaches, our model can learn multi-scale and multi-domain features, effectively removing noise existing on different regions on the time-frequency space in a complementary way. Experimental show that the proposed hybrid model yields better performance and robustness than using each model individually. We first describe the objective function and the selected modules that have been reported to show 36 competitive performance using either raw-audio or spectrogram input. Selected models are 37 each used later as components of our proposed hybrid model. We employ the energy-conserving loss function proposed in which simultaneously considers 40 speech and noise signals. Let the noisy input x consist of clean speech s and noise n. The estimated 41 speech by the model is referred to asŝ. Then, our objective function is defined as follows: L(x, s, n,ŝ) = s −ŝ 1 + n −n 1,wheren = x −ŝ represents the estimated noise signal and · 1 denotes 1 norm. We construct the time domain network based on TasNet which employs one-dimensional dilated 45 convolution to handle long time sequences of raw-audio. TasNet has shown competitive sample quality 46 for speech source separation, which is a similar task to speech enhancement. In our experiments, we used a reduced version of TasNet. With a slight abuse of notation, we refer to the network as noise from the time-frequency space. We hybridize both time and T-F domain networks in a cascaded way (Fig. 1 Figure 1 : A schematic illustration of the hybrid system (MDPhD). Note that the network of the same domain (same color) shares the parameters. For the time-frequency (T-F) domain network, we convert the time-domain input to a spectrogram using the short time Fourier transform (STFT), whose output is converted back to a waveform using the inverse short time Fourier transform (iSTFT).The final objective of the hybrid model with auxiliary loss becomes DISPLAYFORM0 where θ denotes the network parameter. that of U-Net with doubled parameter size (TAB3). Note that, however, TasNet fails to remove high 88 frequency noise, which is supposedly hard to capture in the time domain FIG2. the U → D model shares the weakness of U-Net and vice versa. We conjecture that this happens Using the test dataset, we compared our to recent studies of speech enhancement field. Our model showed the best performance quantitatively and qualitatively among the others under TAB4: Comparison with other methods. The predicted rating of speech distortion (CSIG), distortion (CBAK) and overall quality (COVL) are reported (from 1 to 5, higher is better). PESQ (from -0.5 to 4.5, higher is better) stands for perceptual evaluation of speech quality and SSNR (higher is better) is segmental SNR. The best for each measure is given in bold style. Table 3: SNR evaluation of models with various objective functions. D and U denote the TasNet (reduced) using one-dimensional dilated convolution and U-Net, respectively. The type of objective functions are noted next to the model name. 1 represents our baseline objective function. 
2 represents an objective function that substitutes the 1 term of equation FORMULA0 In this section, we present the detailed configuration of the models we used. In the following figures, Figure 3: U-Net (1.5M) architecture. 2D Conv means a two-dimensional convolution block consisting of a two-dimensional convolution operation with filter size F (height, width), stride size S (height, width) and output channel size C followed by batch renormalization and leaky-RELU activation function. 2D t-Conv means a two-dimensional transposed convolution block. Our baseline models used in experiments process the log-magnitude of the input spectrogram. | A hybrid model utilizing both raw-audio and spectrogram information for speech enhancement tasks. | 1,247 | scitldr |
Consistently checking the statistical significance of experimental is the first mandatory step towards reproducible science. This paper presents a hitchhiker's guide to rigorous comparisons of reinforcement learning algorithms. After introducing the concepts of statistical testing, we review the relevant statistical tests and compare them empirically in terms of false positive rate and statistical power as a function of the sample size (number of seeds) and effect size. We further investigate the robustness of these tests to violations of the most common hypotheses (normal distributions, same distributions, equal variances). Beside simulations, we compare empirical distributions obtained by running Soft-Actor Critic and Twin-Delayed Deep Deterministic Policy Gradient on Half-Cheetah. We conclude by providing guidelines and code to perform rigorous comparisons of RL algorithm performances. Reproducibility in Machine Learning and Reinforcement Learning in particular (RL) has become a serious issue in the recent years. As pointed out in Islam et al. BID0 and Henderson et al. BID1, reproducing the of an RL paper can turn out to be much more complicated than expected. In a thorough investigation, Henderson et al. BID1 showed it can be caused by differences in codebases, hyperparameters (e.g. size of the network, activation functions) or the number of random seeds used by the original study. Henderson et al. BID1 states the obvious: the claim that an algorithm performs better than another should be supported by evidence, which requires the use of statistical tests. Building on these observations, this paper presents a hitchhiker's guide for statistical comparisons of RL algorithms. The performances of RL algorithm have specific characteristics (they are independent of each other, they are not paired between algorithms etc.). This paper reviews some statistical tests relevant in that context and compares them in terms of false positive rate and statistical power. Beside simulations, it compares empirical distributions obtained by running Soft-Actor Critic (SAC) BID2 and Twin-Delayed DDPG (TD3) BID3 on Half-Cheetah BID4. We finally provide guidelines to perform robust difference testing in the context of RL. A repository containing the raw and the code to reproduce all experiments is available at https://github.com/ccolas/rl_stats. In this paper, we consider the problem of conducting meaningful comparisons of Algorithm 1 and Algorithm 2. Because the seed of the random generator is different for each run BID1, two runs of a same algorithm yield different measures of performance. An algorithm performance can therefore be modeled as a random variable X, characterized by a distribution. Measuring the performance x at the end of a particular run is equivalent to measuring a realization of that random variable. Repeating this N times, we obtain a sample x = (x 1, ..., x N) of size N. To compare RL algorithms on the basis of their performances, we focus on the comparisons of the central tendencies (µ 1, µ 2): the means or the medians of the associated random variables X 1, X 2. BID2 Unfortunately, we cannot know µ 1, µ 2 exactly. Given a sample x i of X i, we can estimate µ i by the empirical mean: DISPLAYFORM0 (resp. the empirical median). However, comparing central performances does not simply boil down to the comparison of their estimates. As an illustration, FIG0 shows two normal distributions describing the distributions of two algorithm performances X 1 and X 2. 
Two samples of sample size N = 3 are collected. In this example, we have µ 1 < µ 2 but x 1 > x 2. The rest of this text uses central performance to refer to either the mean or the median of the performance distribution i. It is noted µ i while its empirical estimate is noted x i. The distinction is made where necessary. Statistical difference testing. Statistical difference testing offers a principled way to compare the central performances of two algorithms. It defines two hypothesis: 1) the null hypothesis H 0: ∆µ = µ 1 −µ 2 = 0 and 2) the alternative hypothesis H a: |∆µ| > 0. When performing a test, one initially assumes the null hypothesis to be true. After having observed (x 1, x 2), statistical tests usually estimate the probability to observe two samples whose empirical central difference is at least as extreme as the observed one (|∆x| = |x 1 −x 2 |) under H 0 (e.g. given ∆µ = 0). This probability is called the p-value. If the p-value is very low, the test rejects H 0 and concludes that a true underlying difference (H a) is likely. When the p-value is high, the test does not have enough evidence to conclude. This could be due to the lack of true difference, or to the lack of statistical power (too few measurements given how noisy they are). The significance level α (usually ≤ 0.05) draws the line between rejection and conservation of H 0: if p-value < α, H 0 is rejected. Further experiments demonstrate it is not always the case, which is why we prefer to note the false positive rate α *. False negatives occur when the statistical test fails to recognize a true difference in the central performances. This depends on the size of the underlying difference: the larger the difference, the lower the risk of false negative. The false negative rate is noted β *.Trade-off between false positive and statistical power. Ideally, we would like to set α = 0 to ensure the lowest possible false positive rate α *. However, decreasing the confidence level makes the statistical test more conservative. The test requires even bigger empirical differences ∆x to reject H 0, which decreases the probability of true positive. This probability of true positive 1−β * is called the statistical power of a test. It is the probability to reject H 0 when H a holds. It is directly impacted by the effect size: the larger the effect size, the easier it is to detect (larger statistical power). It is also a direct function of the sample size: larger samples bring more evidence to support the rejection of H 0. Generally, the sample size is chosen so as to obtain a theoretical statistical power of 1−β * = 0.8. Different tests have different statistical powers depending on the assumptions they make, whether they are met, how the p-value is derived etc. Parametric vs. non-parametric. Parametric tests usually compare the means of two distributions by making assumptions on the distributions of the two algorithms' performances. Non-parametric tests on the other hand usually compare the medians and do not require assumptions on the type of distributions. Non-parametric tests are often recommended when one wants to compare median rather than means, when the data is skewed or when the sample size is small. Section 4.2 shows that these recommendations are not always justified. Test statistic. Statistical tests usually use a test statistic. It is a numerical quantity computed from the samples that summarizes the data. 
In the t-test for instance, the statistic t α is computed as t α = |∆x|/σ pool, where σ pool is the pooled standard deviation (σ pool = (σ 2 1 + σ 2 2)/2). Under the t-test assumptions, this statistic follows the analytic Student's distribution with density function f S (t). The probability to obtain a difference more important than the sample difference ∆x (p-value) can be rewritten p-value = P (|t| > t α) and can be computed as the area under f S (t) such that |t| > t α.Relative effect size. The relative effect size is the absolute effect size |∆µ|, scaled by the pooled standard deviation σ pool, such that = |∆µ|/σ pool. The relative effect size is independent of the spread of the considered distributions.3 Statistical Tests for RL Each test makes some assumptions (e.g. about the nature of the performance distributions, their variances, the sample sizes etc.). In the context of RL, some assumptions are reasonable while others are not. It is reasonable to assume that RL performances are measured at random and independently from one another. The samples are not paired, and here we assume they have the same size. BID3 Other common assumptions might be discussed:• Normal distributions of performances: this might not be the case (skewed distributions, bimodal distributions, truncated distributions).• Continuous performances: the support of the performance distribution might be bounded:e.g. in the Fetch environments of Gym BID4, the performance is a success rate in.• Known standard deviations: this is not the case in RL.• Equal standard deviations: this is often not the case (see BID1). This section briefly presents various statistical tests relevant to the comparison of RL performances. It focuses on the underlying assumptions BID5 and provides the corresponding implementation from the Python Scipy library when available. Further details can be found in any statistical textbook. Contrary to Henderson et al. BID1, we do not recommend using the Kolmogorov-Smirnov test as it tests for the equality of the two distributions and does not test for a difference in their central tendencies BID6.T-test. This parametric test compares the means of two distributions and assumes the two distributions have equal variances BID7. If this variance is known, a more powerful test is available: the Z-test for two population means. The test is accurate when the two distributions are normal, it gives an approximate guide otherwise. Implementation: scipy.stats.ttest_ind(x1, x2, equal_var=True).Welch's t-test. It is a t-test where the assumption of equal variances is relaxed BID8. Implementation: scipy.stats.ttest_ind(x1, x2, equal_var=False).Wilcoxon Mann-Whitney rank sum test. This non-parametric test compares the median of two distributions. It does not make assumptions about the type of distributions but assumes they are continuous and have the same shape and spread BID9. Implementation: scipy.stats.mannwhitneyu(x1, x2, alternative='two-sided').Ranked t-test. In this non-parametric test that compares the medians, all realizations are ranked together before being fed to a traditional t-test. Conover and Iman BID10 shows that the computed statistic is a monotonically increasing function of the statistic computed by the Wilcoxon MannWhitney test, making them really close. Implemented in our code. Bootstrap confidence interval test. In the bootstrap test, the sample is considered to be an approximation of the original distribution BID11. 
Given two observed samples (x 1, x 2) of size N, we obtain two bootstrap samples (x 1,x 2) of size N by sampling with replacement in (x 1, x 2) respectively and compute the difference in empirical means ∆x. This procedure is repeated a large number of times (e.g. 103). The distance between percentiles α×100 2 and 100(1− α 2) is considered to be the 100(1−α)% confidence interval around the true mean difference ∆µ. If it does not include 0, the test rejects the null hypothesis with confidence level α. This test does not require any assumptions on the performance distributions, but we will see it requires large sample sizes. Implementation: https://github.com/facebookincubator/bootstrapped. Permutation test. Under the null hypothesis, the realizations of both samples would come from distributions with the same mean. The empirical mean difference (∆x) should not be affected by the relabelling of the different realization (in average). The permutation test performs permutations of the realization labels and computes ∆x =x 1 −x 2. This procedure is repeated many times (e.g. 103). H 0 is rejected if the proportion of |∆x| that falls below the original difference |∆x| is higher than 1−α. Implemented in our code. This section compares the above statistical tests in terms of their false positive rates and statistical powers. A false positive rate estimates the probability to claim that two algorithms perform differently when H 0 holds. It impacts directly the reproducibility of a piece of research and should be as low as possible. Statistical power is the true positive rate and refers to the probability to find evidence for an existing effect. The following study is an extension of the one performed in BID12. We conduct experiments using models of RL distributions (analytic distributions) and true empirical RL distributions collected by running 192 trials of both SAC BID2 and TD3 BID3 on Half-Cheetah-v2 BID4 for 2M timesteps. Investigating the case of non-normal distributions. Several candidate distributions are selected to model RL performance distributions (FIG2): a standard normal distribution, a lognormal distribution and a bimodal distribution that is an even mixture of two normal distributions. All these distributions are tuned so that µ = 0, σ = 1. In addition we use two empirical distributions of size 192 collected from SAC and TD3.Investigating the case of unequal standard deviations. To investigate the effect of unequal standard deviations, we tune the distribution parameters to double the standard deviation of Algorithm 2 as compared to Algorithm 1. We also compare SAC and TD3 which have different standard deviations (σ T D3 = 1.15 σ SAC).Measuring false positive rates. To test for false positive rates α *, we simply enforce H 0 by aligning the central performances of the two distributions: µ 1 = µ 2 = 0 (the median for the MannWhitney test and the ranked t-test, the mean for others). Given one test, two distributions and a sample size, we sample x 1 and x 2 from distributions X 1, X 2 and compare them using the test with α = 0.05. We repeat this procedure N r = 10 3 times and estimate α * as the proportion of H 0 rejection. BID4 Using the spinning up implementation of OpenAI: https://github.com/openai/spinningupThe standard error of this estimate is: se(α *) = (α * (1−α *)/N r. It is smaller than the widths of the lines on the reported figures. 
This procedure is repeated for every test, every combination of distributions and for several sample sizes (see pseudo-code in the supplementary material).Measuring true positive rates (statistical power). Here, we enforce the alternative hypothesis H a by sampling x 1 from a given distribution centered in 0 (mean or median depending on the test), and x 2 from a distribution whose mean (resp. median) is shifted by an effect size ∆µ. Given one test, two distributions (the second being shifted) and the sample size, we repeat the procedure above and obtain an estimate of the true positive rate. Tables reporting the statistical powers for various effect sizes, sample sizes, tests and assumptions are made available in the supplementary . Same distributions, equal standard deviations., we can directly compare the mean estimates (the lines) to the significance level α = 0.05, the standard errors being smaller than the widths of these lines. BID5 α * is very large when using bootstrap tests, unless large sample sizes are used (>40). Using small sample sizes (<5), the permutation and the ranked t-test also show large α *. Results using two log-normal distributions show similar behaviors and can be found in the supplementary . Same distributions, unequal standard deviations. Here, we sample x 1 from a distribution, and x 2 from the same type of distribution with doubled standard deviation. Comparing two normal distributions with different standard deviation does not differ much from the case with equal standard deviations. Figure 4 (a) (bimodal distributions) shows that Mann-Whitney and ranked t-test (median tests) constantly overestimate α *, no matter the sample size (α * > 0.1). For log-normal distributions on the other hand (Figure 4(b) ), the false positive rate using these tests respects the confidence level (α * ≤ α) with sample sizes higher than N = 10. However, other tests tend to show large α *, even for large sample sizes (α * ≈ 0.07 up to N > 50).Different distributions, equal standard deviations. Now we compare samples coming from different distributions with equal standard deviations. Comparing normal and bimodal distributions of equal standard deviation does not impact much the false positive rates curves (similar to FIG3 (a)). However, FIG6 (a) and 5(b) show that when one of the two distributions is skewed (log-normal), the Mann-Whitney and the ranked t-test demonstrate very important false positive rate, a phenomenon that gets worse with larger sample sizes. Section 4.5 discusses why it might be the case. Different distributions, unequal standard deviations. We now combine different distributions and different standard deviations. As before, comparing a skewed distribution (log-normal) and a symmetric one leads to high false positive rates for the Mann-Whitney test and the ranked t-test BID5 We reproduced all the twice, hardly seeing any difference in the figures. All tests show similar estimations of statistical power. More than 50 samples are needed to detect a relative effect size = 0.5 with 80% probability, close to 20 with = 1 and a bit more than 10 with = 2. Tables reporting statistical power for Finally, we compare two empirical distributions obtained from running two RL algorithms (SAC, TD3) 192 times each, on Half-Cheetah. We observe a small increase in false positive rates when using the ranked t-test (Figure 7). The relative effect size estimated from the empirical distributions is = 0.80 (median), or = 0.93 (mean), in favor of SAC. 
For such relative effect sizes, the sample sizes required to achieve a statistical power of 0.8 are between 10 and 15 for tests comparing the mean and between 15 and 20 for tests comparing the median (see full table in supplementary ). Using a sample size N = 5 with the Welch's t-test, the effect size would need to be 3 to 4 times larger to be detected with 0.8 probability. No matter the distributions. From the above , it seems clear that the bootstrap test should never be used for sample sizes below N = 50 and the permutation test should never be used for sample sizes below N = 10. The bootstrap test in particular, uses the sample as an estimate of the true performance distribution. A small sample is a very noisy estimate, which leads to very high false positive rates. The ranked t-test shows a false positive rate of 0 and a statistical power of 0 when N = 2 in all conditions. As noted in BID12, comparing two samples of size N = 2 can in only four possible p-values (only 4 possible orders when ranked), none of which falls below α = 0.05. Such quantization issues make this test unreliable for small sample sizes, see BID12 for further comments and references on this issue. When distributions do not meet assumptions. In addition to the behaviors reported above, Section 4.2 shows that non-parametric tests (Mann-Whitney and ranked t-test) can demonstrate very high false positive rates when comparing a symmetric distribution with a skewed one (log-normal). This effect gets worse linearly with the sample size. When the sample size increases, the number of samples drawn in the skewed tail of the log-normal increases. All these realizations will be ranked above any realizations from the other distribution. Therefore, the larger the sample size, the more realization are ranked first in favor of the log-normal, which leads to a bias in the statistical test. This problem does not occur when two log-normal are compared to one another. Comparing a skewed distribution to a symmetric one violates the Mann-Whitney assumptions stating that distributions must have the same shape and spread. The false positive rates of Mann-Whitney and ranked t-test are also above the confidence level whenever a bimodal distribution is compared to another distribution. The traditional recommendation to use non-parametric tests when the distributions are not normal seems to be failing when the two distributions are different. Most robust tests. The t-test and the Welch's t-test were found to be more robust than others to violations of their assumptions. However, α * was found to be slightly above the required level (α * > α) when at least one of the two distributions is skewed (α * ≈ 0.1) no matter the sample size, and when one of the two distributions is bimodal, for small sample sizes N < 10. Welch's α * is always a bit lower than the t-test's α *.Statistical power. Except for the anomalies in small sample size mentioned above due to overconfident tests like the bootstrap or the permutation tests, statistical powers stay qualitatively stable no matter the distributions compared, or the test used: = 0.5: N ≈ 100; = 1: N ≈ 20 and = 2: N ≈ 5, 10. Measuring the performance of RL Algorithms. Before using any statistical test, one must obtain measures of performance. RL algorithms should ideally be evaluated offline. The algorithm performance after t steps is measured as the average of the returns over E evaluation episodes conducted independently from training, usually using a deterministic version of the current policy (e.g. 
E = 20). Evaluating agents by averaging performances over several training episodes in a much less interpretable performance measure and should be stated clearly. The collection of performance measures forms a learning curve. Representing learning curves. After obtaining a learning curve for each of the N runs, it can be rendered on a plot. At each evaluation, one can represent either the empirical mean or median. Whereas the empirical median directly represents the center of the collected sample, the empirical mean tries to model the sample as coming from a Gaussian distribution, and under this assumptions represents the maximum likelihood estimate of that center. Error bars should also be added to this plot. The standard deviation (SD) represents the variability of the performances, but is only representative when the values are approximately normally distributed. When it is not normal, one should prefer to represent interpercentile ranges (e.g. 10% − 90%). If the sample size is small (e.g. <10), the most informative solution is to represent all learning curves in addition to the mean or median. When performances are normally distributed, the standard error of the mean (SE) or confidence intervals can be used to represent estimates of the uncertainty on the mean estimate. Robust comparisons. Which test, which sample sizes? The in Section 4.2 advocate for the use of the Welch's t-test, which shows lower false positive rate and similar statistical powers than other tests. However, the false positive rate often remains superior to the confidence level α * > α when the distributions are not normal. When in doubt, we recommend using lower confidence levels α < 0.05 (e.g. α = 0.01) to ensure that α * < 0.05. The number of random seeds to be used to achieve a statistical power of 0.8 depends on the expected relative effect size: = 0.5: N ≈ 100; = 1: N ≈ 20 and = 2: N ≈ 5,10. The analysis of a real case comparing SAC and TD3 algorithms, required a sample size between N = 10 and N = 15 for a relatively strong effect = 0.93 when comparing the means, and about 5 more seeds when comparing the medians (= 0.80). Small sample sizes like N = 5 would require 3 to 4 times larger effects. A word on multiple comparisons. When performing multiple comparisons (e.g. between different pairs of algorithms evaluated in the same setting), the probability to have at least one false positive increases linearly with the number of comparisons n c. This probability is called the Family-Wise Error Rate (FWER). To correct for this effect, one must apply corrections. The Bonferroni correction for instance adapts the confidence level α Bonf. = α/n c BID13. This ensures that the FWER stays below the initial α. Using this corrections makes each test more conservative and decreases its statistical power. Comparing full learning curves. Instead of only comparing the final performances of the two algorithms after T timesteps in the environment, we can compare performances along learning. This consists in performing a statistical comparison for every evaluation step. This might reveal differences in speed of convergence and can provide more robust comparisons. Further discussions on how this relates to the problem of multiple comparison is given in the supplementary materials. In , this paper advocates for the use of Welch's t-test with low confidence level (α < 0.05) to ensure a false positive rate below α * < 0.05. The sample size must be selected carefully depending on the expected relative effect size. 
It also warns against the use of other unreliable tests, such as the bootstrap test (for N < 50), the Mann-Whitney and the ranked t-test (unless assumptions are carefully checked), or the permutation test (for N < 10). Using the t-test or the Welch's t-test with small sample sizes (<5) usually leads to high false positive rate and would require very large relative effect sizes (over = 2) to show good statistical power. Sample sizes above N = 20 generally meet the requirement of a 0.8 statistical power for a relative effect size = 1. Algorithm 1 represents the pseudo-code of the experiment. The whole code can be found at https: //github.com/ccolas/rl_stats. distributions refers to a list of pairs of distributions. When comparing tests for an equal distribution setting, the pairs represent twice the same type of distribution. When comparing for an unequal variance setting, the standard deviation of the second distribution is doubled. The number of repetitions is set to 10.000. The rejection variable refers to the rejection of the null hypothesis. The false positive error rates can be found in _array [for i_t, test in tests do 5: for i_e, effect_size in effect_sizes do 6: for i_ss, N in sample_sizes do for i_r = 1: nb_repets do 9: distrib BID0 .shift(effect) 10: sample1 = distrib.sample(N) 11: sample2 = distrib BID0.sample(N) 12: rejection_list.append(test.test(sample1, sample2, α)) 13: _array[i_d, i_t, i_e, i_ss] = mean(rejection_list) The correction to apply when comparing two learning curves depends 1) on the number of comparisons, 2) on the criteria that is used to conclude whether an algorithm is better than the other. The criteria used to draw a must be decided before running any test. An example can be: if when comparing the last 100 performance measures of the two algorithms, more than 50 comparisons show a significant difference, then Algorithm 1 is better than Algorithm 2. In that case, the number of comparisons performed is N c = 100, and the criterion is N rejection > N crit = 50. We want to constrain the probability FWER that our criterion is met by pure chance to a confidence level α=0.05. This probability is: FWER = α×N c /N crit. To make it satisfy FWER = 0.05, we need to correct α such as α corrected = α×N crit /N c (α corrected = α/2 in our case). Table 6: Statistical power when comparing samples from two bimodal distribution with different standard deviation. The first is centered in 0 (µ 1 = 0, σ 1 = 1 mean or median depending on the test), the other shifted by the relative effect size (µ 2 = σ pool, σ 2 = 2). Both have same standard deviation σ 1 = σ 2 = 1. Each represents the percentage of true positive over 10.000 repetitions. In bold are satisfying a true positive rate above 0.8. | This paper compares statistical tests for RL comparisons (false positive, statistical power), checks robustness to assumptions using simulated distributions and empirical distributions (SAC, TD3), provides guidelines for RL students and researchers. | 1,248 | scitldr |
In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation? In this paper, we set out to answer these questions by studying multiple phase transitions in the IB objective: IB_β[p(z|x)] = I(X; Z) − βI(Y; Z) defined on the encoding distribution p(z|x) for input X, target Y and representation Z, where sudden jumps of dI(Y; Z)/dβ and prediction accuracy are observed with increasing β. We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes. Using second-order calculus of variations, we derive a formula that provides a practical condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models. We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between X and Y orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings. Based on the theory, we present an algorithm for discovering phase transition points. Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10. The Information Bottleneck (IB) objective : explicitly trades off model compression (I(X; Z), I(·; ·) denoting mutual information) with predictive performance (I(Y ; Z)) using the Lagrange multiplier β, where X, Y are observed random variables, and Z is a learned representation of X. The IB method has proved effective in a variety of scenarios, including improving the robustness against adversarial attacks , learning invariant and disentangled representations (a; b), underlying information-based geometric clustering (b), improving the training and performance in adversarial learning , and facilitating skill discovery and learning goal-conditioned policy in reinforcement learning. From Eq. we see that when β → 0 it will encourage I(X; Z) = 0 which leads to a trivial representation Z that is independent of X, while when β → +∞, it reduces to a maximum likelihood objective 1 that does not constrain the information flow. Between these two extremes, how will the IB objective behave? Will prediction and compression performance change smoothly, or do there exist interesting transitions in between? , the authors observe and study the learnability transition, i.e. the β value such that the IB objective transitions from a trivial global minimum to learning a nontrivial representation. They also show how this first phase transition relates to the structure of the dataset. However, to answer the full question, we need to consider the full range of β. Motivation. To get a sense of how I(Y ; Z) and I(X; Z) vary with β, we train Variational Information Bottleneck (VIB) models on the CIFAR10 dataset , where each experiment is at a different β and random initialization of the model. Fig. 1 shows the I(X; Z), I(Y ; Z) and accuracy vs. β, as well as I(Y ; Z) vs. I(X; Z) for CIFAR10 with 20% label noise (see Appendix I for details). are discontinuous and the accuracy has discrete jumps. 
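Since the two information terms in the IB objective are intractable for image data, experiments of this kind train a variational surrogate. The PyTorch sketch below shows one common form of such a surrogate under the sign convention used here, where β multiplies the prediction term, so that minimizing I(X; Z) − βI(Y; Z) corresponds, up to constants, to minimizing a KL term plus β times the cross-entropy; the small architecture and all names are illustrative assumptions rather than the exact model behind Fig. 1.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBClassifier(nn.Module):
    """Stochastic encoder q(z|x) = N(mu(x), diag(sigma^2(x))) followed by a classifier q(y|z)."""
    def __init__(self, in_dim, z_dim, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * z_dim))
        self.classifier = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
        return self.classifier(z), mu, log_var

def ib_surrogate_loss(model, x, y, beta):
    logits, mu, log_var = model(x)
    # KL(q(z|x) || N(0, I)) is a variational upper bound on I(X; Z).
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0).sum(dim=-1).mean()
    # Cross-entropy is, up to the constant H(Y), the negative of a variational lower bound on I(Y; Z).
    ce = F.cross_entropy(logits, y)
    # Minimizing I(X; Z) - beta * I(Y; Z) therefore corresponds (up to constants) to:
    return kl + beta * ce

Training one such model per β on a grid, and recording the two estimated terms and the test accuracy, is how curves of the kind shown in Fig. 1 are produced.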
The observation lets us refine our question: When do the phase transitions occur, and how do they depend on the structure of the dataset? These questions are important, since answering them will help us gain a better understanding of the IB objective and its close interplay with the dataset and the learned representation. Moreover, the IB objective belongs to a general form of two-term trade-offs in many machine learning objectives: L = Prediction-loss + β · Complexity, where the complexity term generally takes the form of regularization. Usually, learning is set at a specific β. Many more insights can be gained if we understand the behavior of the prediction loss and model complexity with varying β, and how they depend on the dataset. The techniques developed to address the question in the IB setting may also help us understand the two-term tradeoff in other learning objectives. Contributions. In this work, we begin to address the above question in IB settings. Specifically: • We identify a qualitative change of the IB loss landscape w.r.t. p(z|x) for varying β as IB phase transitions (Section 3). • Based on the definition, we introduce a quantity G[p(z|x)] and use it to prove a theorem giving a practical condition for IB phase transitions. We further reveal the connection between G[p(z|x)] and the Fisher information matrix when p(z|x) is parameterized by θ (Section 3). • We reveal the close interplay between the IB objective, the dataset and the learned representation, by showing that in IB, each phase transition corresponds to learning a new nonlinear component of maximum correlation between X and Y, orthogonal to the previously-learned Z, and each with decreasing strength (Section 4). To the best of our knowledge, our work provides the first theoretical formula to address IB phase transitions in the most general setting. In addition, we present an algorithm for iteratively finding the IB phase transition points (Section 5). We show that our theory and algorithm give tight matches with the observed phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent transitions in CIFAR10 experiments (Section 6). The Information Bottleneck Method provides a tabular method based on the Blahut-Arimoto (BA) Algorithm to numerically solve the IB functional for the optimal encoder distribution P (Z|X), given the trade-off parameter β and the cardinality of the representation variable Z. This work has been extended in a variety of directions, including to the case where all three variables X, Y, Z are multivariate Gaussians , cases of variational bounds on the IB and related functionals for amortized learning (; a;), and a more generalized interpretation of the constraint on model complexity as a Kolmogorov Structure Function . Previous theoretical analyses of IB include , which looks at IB through the lens of copula functions, and , which starts to tackle the question of how to bound generalization with IB. We will make practical use of the original IB algorithm, as well as the amortized bounds of the Variational Informormation Bottleneck and the Conditional Entropy Bottleneck . Phase transitions, where key quantities change discontinuously with varying relative strength in the two-term trade-off, have been observed in many different learning domains, for multiple learning objectives. , the authors observe phase transitions in the latent representation of β-VAE for varying β. 
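For reference, the tabular Blahut–Arimoto style solver mentioned above (and used again in Section 5) can be written in a few lines of numpy for discrete X and Y; this is a generic textbook implementation with my own variable names, not code from any of the cited works.

import numpy as np

def ib_blahut_arimoto(p_xy, n_z, beta, n_iters=500, seed=0):
    """Solve min_{p(z|x)} I(X;Z) - beta * I(Y;Z) for discrete X, Y with |Z| = n_z."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                       # shape (|X|,)
    p_y_given_x = p_xy / p_x[:, None]            # shape (|X|, |Y|)

    # Random initialization of the encoder p(z|x).
    p_z_given_x = rng.random((p_xy.shape[0], n_z))
    p_z_given_x /= p_z_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        p_z = p_x @ p_z_given_x                                      # shape (|Z|,)
        # Decoder p(y|z) = sum_x p(y|x) p(x|z), using the Markov chain Z - X - Y.
        p_y_given_z = (p_z_given_x * p_x[:, None]).T @ p_y_given_x   # shape (|Z|, |Y|)
        p_y_given_z /= p_y_given_z.sum(axis=1, keepdims=True)

        # Self-consistent update: p(z|x) proportional to p(z) * exp(-beta * KL(p(y|x) || p(y|z))).
        log_ratio = np.log(p_y_given_x[:, None, :] + 1e-12) - np.log(p_y_given_z[None, :, :] + 1e-12)
        kl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)       # shape (|X|, |Z|)
        p_z_given_x = p_z[None, :] * np.exp(-beta * kl)
        p_z_given_x /= p_z_given_x.sum(axis=1, keepdims=True)
    return p_z_given_x

With this sign convention, small β drives the encoder toward a trivial representation and large β toward retaining all predictive information, matching the behaviour of Eq. (1) described earlier.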
Strouse & Schwab (2017b) utilize the kink angle of the phase transitions in the Deterministic Information Bottleneck (DIB) (a) to determine the optimal number of clusters for geometric clustering. explicitly considers critical points in binary classification tasks using a discrete information bottleneck with a non-convex Pareto-optimal frontier. In Achille & Soatto (2018a) x, and Σ x is the covariance matrix. This work provides valuable insights for IB, but is limited to the special case that X, Y and Z are jointly Gaussian. Phase transitions in the general IB setting have also been observed, which describes as "information bifurcation". , the authors study the first phase transition, i.e. the learnability phase transition, and provide insights on how the learnability depends on the dataset. Our work is the first work that addresses all the IB phase transitions in the most general setting, and provides theoretical insights on the interplay between the IB objective, its phase transitions, the dataset, and the learned representation. 3 FORMULA FOR IB PHASE TRANSITIONS 3.1 DEFINITIONS Let X ∈ X, Y ∈ Y, Z ∈ Z be random variables denoting the input, target and representation, respectively, having a joint probability distribution p(X, Y, Z), with X × Y × Z its support. X, Y and Z satisfy the Markov chain Z − X − Y, i.e. Y and Z are conditionally independent given X. We assume that the integral (or summing if X, Y or Z are discrete random variables) is on X × Y × Z. We use x, y and z to denote the instances of the respective random variables. The above settings are used throughout the paper. We can view the IB objective IB β [p(z|x)] (Eq. 1) as a functional of the encoding distribution p(z|x). To prepare for the introduction of IB phase transitions, we first define relative perturbation function and second variation, as follows. Definition 1. Relative perturbation function: For p(z|x), its relative perturbation function r(z|x) is a bounded function that maps X × Z to R and satisfies E z∼p(z|x) [r(z|x)] = 0. Formally, define We have that r(z|x) ∈ Q Z|X iff r(z|x) is a relative perturbation function of p(z|x). The perturbed probability (density) is p (z|x) = p(z|x) (1 + · r(z|x)) for some > 0. Definition 2. Second variation: Let functional F [f (x)] be defined on some normed linear space R. Let us add a perturbative function · h(x) to f (x), and now the functional F [f (x) + · h(x)] can be expanded as is a linear functional of ·h(x), and is called the first variation, denoted as δF is a quadratic functional of · h(x), and is called the second variation, denoted as δ We can think of the perturbation function · h(x) as an infinite-dimensional "vector" (x being the indices), with being its amplitude and h(x) its direction. Here β + and 0 − denote one-sided limits. We can understand the δ 2 IB β [p(z|x)] as a local "curvature" of the IB objective IB β (Eq. 1) w.r.t. p(z|x), along some relative perturbation r(z|x). A phase transition occurs when the convexity of IB β [p(z|x)] w.r.t. p(z|x) changes from a minimum to a saddle point in the neighborhood of its optimal solution p * β (z|x) as β increases from β c to β c + 0 +. This means that there exists a perturbation to go downhill and find a better minimum. We validate this definition empirically below. The definition for IB phase transition (Definition 3) indicates the important role δ 2 IB β [p(z|x)] plays on the optimal solution in providing the condition for phase transitions. 
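Definitions 1–3 can be probed numerically on small categorical problems: take an encoder p(z|x), draw a relative perturbation r(z|x) with E_{z∼p(z|x)}[r(z|x)] = 0, and track how IB_β changes with the perturbation amplitude ε. The sketch below is an illustration I constructed; a random encoder is used only to keep it self-contained, whereas the probe is most meaningful at an optimal encoder p*_β, where the first-order term vanishes and the sign of the ε² coefficient is exactly the quantity Definition 3 is about.

import numpy as np

def mutual_info(p_joint):
    """Mutual information between the two axes of a discrete joint probability table."""
    p1 = p_joint.sum(axis=1, keepdims=True)
    p2 = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log(p_joint[mask] / (p1 @ p2)[mask])).sum())

def ib_objective(p_x, p_y_given_x, p_z_given_x, beta):
    p_xz = p_x[:, None] * p_z_given_x           # p(x, z)
    p_yz = p_y_given_x.T @ p_xz                 # p(y, z), using the Markov chain Z - X - Y
    return mutual_info(p_xz) - beta * mutual_info(p_yz)

rng = np.random.default_rng(1)
p_x = np.full(3, 1.0 / 3.0)
p_y_given_x = rng.dirichlet(np.ones(3), size=3)    # toy p(y|x)
p_z_given_x = rng.dirichlet(np.ones(3), size=3)    # encoder; ideally p*_beta from an IB solver

# Random relative perturbation with E_{z ~ p(z|x)}[r(z|x)] = 0, so that
# p(z|x) * (1 + eps * r(z|x)) stays normalized for small eps.
r = rng.standard_normal(p_z_given_x.shape)
r -= (p_z_given_x * r).sum(axis=1, keepdims=True)

beta = 2.0
base = ib_objective(p_x, p_y_given_x, p_z_given_x, beta)
for eps in [1e-2, 5e-3, 2.5e-3]:
    delta = ib_objective(p_x, p_y_given_x, p_z_given_x * (1.0 + eps * r), beta) - base
    print(f"eps = {eps:.4g}  change in IB_beta = {delta:+.3e}  change / eps^2 = {delta / eps**2:+.2f}")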
To concretize it and prepare for a more practical condition for IB phase transitions, we expand IB β [p(z|x)(1 + · r(z|x))] to the second order of, giving: The proof is given in Appendix B, in which we also give Eq. for empirical estimation. Note that Lemma 0.1 is very general and can be applied to any p(z|x), not only at the optimal solution p * β (z|x). The Fisher Information matrix. In practice, the encoder p θ (z|x) is usually parameterized by some parameter vector θ = (θ 1, θ 2, ...θ k) T ∈ Θ, e.g. weights and biases in a neural net, where Θ is the parameter field. An infinitesimal change of θ ← θ + ∆θ induces a relative perturbation · r(z|x) ∆θ, from which we can compute the threshold function where are the conditional Fisher information matrix of θ for Z conditioned on X and Y, respectively. λ max is the largest eigenvalue of C −1 I Z|Y (θ) − I Z (θ) (C T) −1 with v max the corresponding eigenvector, where CC T is the Cholesky decomposition of the matrix I Z|X (θ) − I Z (θ), and v max is the eigenvector for λ max. The infimum is attained at ∆θ = (The proof is in appendix C. We see that for parameterized encoders p θ (z|x), each term of G[p(z|x)] in Eq. can be replaced by a bilinear form with the Fisher information matrix of the respective variables. Although this lemma is not required to understand the more general setting of Lemma 0.1, where the model is described in a functional space, Lemma 0.2 helps understand G[p(z|x)] for parameterized models, which permits directly linking the phase transitions to the model's parameters. Phase Transitions. Now we introduce Theorem 1 that gives a concrete and practical condition for IB phase transitions, which is the core of the paper: Theorem 1. The IB phase transition points {β c i} as defined in Definition 3 are given by the roots of the following equation: where We can understand Eq. as the condition when δ 2 IB β [p(z|x)] is about to be able to be negative at the optimal solution p * β (z|x) for a given β. The proof for Theorem 1 is given in Appendix D. In Section 4, we will analyze Theorem 1 in detail. In this section we set out to understand G[p(z|x)] as given by Eq. and the phase transition condition as given by Theorem 1, from the perspectives of Jensen's inequality and representational maximum correlation. The condition for IB phase transitions given by Theorem 1 involves which is in itself an optimization problem. We can understand using Jensen's inequality: The equality between A and B holds when the perturbation r(z|x) is constant w.r.t. x for any z; the equality between B and C holds when E x∼p(x|y,z) [r(z|x)] is constant w.r.t. y for any z. Therefore, the minimization of A−C B−C encourages the relative perturbation function r(z|x) to be as constant w.r.t. x as possible (minimizing intra-class difference), but as different w.r.t. different y as possible (maximizing inter-class difference), ing in a clustering of the values of r(z|x) for different examples x according to their class y. Because of this clustering property in classification problems, we conjecture that there are at most |Y| − 1 phase transitions, where |Y| is the number of classes, with each phase transition differentiating one or more classes. Under certain conditions we can further simplify G[p(z|x)] and gain a deeper understanding of it. Firstly, inspired by maximum correlation , we introduce two new concepts, representational maximum correlation and conditional maximum correlation, as follows. Definition 4. 
Given a joint distribution p(X, Y), and a representation Z satisfying the Markov chain Z − X − Y, the representational maximum correlation ρ r (X, Y ; Z) is defined as where The conditional maximum correlation ρ m (X, Y |Z) is defined as: where We prove the following Theorem 2, which expresses G[p(z|x)] in terms of representational maximum correlation and related quantities, with proof given in Appendix F. Z|X and Q Z|X satisfy:, then we have: (i) The representation maximum correlation and G: (ii) The representational maximum correlation and conditional maximum correlation: where z * = arg max z∈Z ρ m (X, Y |Z = z), and h * (x) is the optimal solution for the learn- (iv) For discrete X, Y and Z, we have where σ 2 (Z) is the second largest singular value of the matrix Q X,Y |Z:= Theorem 2 furthers our understanding of G[p(z|x)] and the phase transition condition (Theorem 1), which we elaborate as follows. Discovering maximum correlation in the orthogonal space of a learned representation: Intuitively, the representational maximum correlation measures the maximum linear correlation between f (X, Z) and g(Y, Z) among all real-valued functions f, g, under the constraint that f (X, Z) is "orthogonal" to p(X|Z) and is the inverse square of this representational maximum correlation. Theorem 2 (ii) further shows that G[p(z|x)] is finding a specific z * on which maximum (nonlinear) correlation between X and Y 2 For discrete X, Z such that the cardinality |Z| ≥ |X |, this is generally true since in this scenario, h(x, z) and s(z) have |X ||Z| + |Z| unknown variables, but the condition has only |X ||Z| + |X | linear equations. The difference between Q Z|X and Q conditioned on Z can be found. Combined with Theorem 1, we have that when we continuously increase β, for the optimal representation Z * β given by p * β (z|x) at β, ρ r (X, Y ; Z * β) shall monotonically decrease due to that X and Y has to find their maximum correlation on the orthogonal space of an increasingly better representation Z * β that captures more information about X. A phase transition occurs when ρ r (X, Y ; Z * β) reduces to, after which as β continues to increase, ρ r (X, Y ; Z * β) will try to find maximum correlation between X and Y orthogonal to the full previously learned representation. This is reminiscent of canonical-correlation analysis (CCA) in linear settings, where components with decreasing linear maximum correlation that are orthogonal to previous components are found one by one. In comparison, we show that in IB, each phase transition corresponds to learning a new nonlinear component of maximum correlation between X and Y in Z, orthogonal to the previously-learned Z. In the case of classification where different classes may have different difficulty (e.g. due to label noise or support overlap), we should expect that classes that are less difficult as measured by a larger maximum correlation between X and Y are learned earlier. Conspicuous subset conditioned on a single z: Furthermore, we show in (iii) that an optimal relative perturbation function r(z|x) can be decomposed into a product of two factors, a factor that only focus on perturbing a specific point z * in the representation space, and an h * (x) factor that is finding the "conspicuous subset" , i.e. the most confident, large, typical, and imbalanced subset in the X space for the distribution Singular values In categorical settings, (iv) reveals a connection between G[p(z|x)] and the singular value of the Q X,Y |Z matrix. 
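The extracted text above does not reproduce the entries of Q_{X,Y|Z}. A natural reading, consistent with classical maximal-correlation analysis (for the matrix with entries p(x, y|z)/√(p(x|z) p(y|z)), the top singular value is 1 and the second singular value equals the conditional maximal correlation), leads to the following numpy sketch for estimating G on a discrete joint distribution as the inverse square of the largest such second singular value over z; the exact matrix definition should be treated as my assumption rather than the paper's.

import numpy as np

def estimate_G_via_svd(p_xyz):
    """Estimate G = 1 / max_z sigma_2(Q_{X,Y|Z=z})^2 for a discrete joint table p(x, y, z).

    Assumed entries (a natural choice, not copied from the paper):
        Q[x, y] = p(x, y | z) / sqrt(p(x | z) * p(y | z)),
    whose second largest singular value is the conditional maximal correlation rho_m(X, Y | Z=z).
    """
    eps = 1e-12
    p_z = p_xyz.sum(axis=(0, 1))
    best_sigma2 = eps
    for k in range(p_xyz.shape[2]):
        if p_z[k] < eps:
            continue
        p_xy_z = p_xyz[:, :, k] / p_z[k]                     # p(x, y | z = k)
        p_x_z, p_y_z = p_xy_z.sum(axis=1), p_xy_z.sum(axis=0)
        q = p_xy_z / (np.sqrt(np.outer(p_x_z, p_y_z)) + eps)
        s = np.linalg.svd(q, compute_uv=False)               # singular values, descending; s[0] is ~1
        if s.size > 1:
            best_sigma2 = max(best_sigma2, s[1])
    return 1.0 / best_sigma2 ** 2

Combined with a tabular IB solver, comparing β with this estimate of G[p*_β] is one way to mimic the fixed-point search of the algorithm presented in Section 5.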
Due to the property of SVD, we know that the square of the singular values of Q X,Y |Z equals the non-negative eigenvalue of the matrix Q T X,Y |Z Q X,Y |Z. Then the phase transition condition in Theorem 1 is equivalent to a (nonlinear) eigenvalue problem. This is resonant with previous analogy with CCA in linear settings, and is also reminiscent of the linear eigenvalue problem in Gaussian IB . As a consequence of the theoretical analysis above, we are able to derive an algorithm to efficiently estimate the phase transitions for a given model architecture and dataset. This algorithm also permits us to empirically confirm some of our theoretical in Section 6. Typically, classification involves high-dimensional inputs X. Without sweeping the full range of β where at each β it is a full learning problem, it is in general a difficult task to estimate the phase transitions. In Algorithm 1, we present a two-stage approach. In the first stage, we train a single maximum likelihood neural network f θ with the same encoder architecture as in the (variational) IB to estimate p(y|x), and obtain an N × C matrix p(y|x), where N is the number of examples in the dataset and C is the number of classes. In the second stage, we perform an iterative algorithm w.r.t. G and β, alternatively, to converge to a phase transition point. Specifically, for a given β, we use a Blahut-Arimoto type IB algorithm to efficiently reach IB optimal p * β (z|x) at β, then use SVD (with the formula given in Theorem 2 (iv)) to efficiently estimate G[p * β (z|x)] at β (step 8). We then use the G[p * β (z|x)] value as the new β and do it again (step 7 in the next iteration). At convergence, we will reach the phase transition point given by G[p * β (z|x)] = β (Theorem 1). After convergence as measured by patience parameter K, we slightly increase β by δ (step 13), so that the algorithm can discover the subsequent phase transitions. We quantitatively and qualitatively test the ability of our theory and Algorithm 1 to provide good predictions for IB phase transitions. We first verify them in fully categorical settings, where X, Y, Z are all discrete, and we show that the phase transitions can correspond to learning new classes as we increase β. We then test our algorithm on versions of the MNIST and CIFAR10 datasets with added label noise. 8: 6.1 CATEGORICAL DATASET For categorical datasets, X and Y are discrete, and p(X) and p(Y |X) are given. To test Theorem 1, we use the Blahut-Arimoto IB algorithm to compute the optimal p * β (z|x) for each β. I(Y ; Z *) vs. β is plotted in Fig. 2 (a). There are two phase transitions at β Moreover, starting at β = 1, Alg. 1 converges to each phase transition points within few iterations. Our other experiments with random categorical datasets show similarly tight matches. Furthermore, in Appendix G we show that the phase transitions correspond to the onset of separation of p(z|x) for subsets of X that correspond to different classes. This supports our conjecture from Section 4.1 that there are at most |Y| − 1 phase transitions in classification problems. For continuous X, how does our algorithm perform, and will it reveal aspects of the dataset? We first test our algorithm in a 4-class MNIST with noisy labels 3, whose confusion matrix and experimental settings are given in Appendix H. Fig. 3 (a) shows the path Alg. 1 takes. We see again that in each Figure 3: (a) Path of Alg. 1 starting with β = 1, where the maximum likelihood model f θ is using the same encoder architecture as in the CEB model. 
This stairstep path shows that Alg. 1 is able to ignore very large regions of β, while quickly and precisely finding the phase transition points. Also plotted is an accumulation of G[p * β (z|x)] vs. β by running Alg. 1 with varying starting β (blue dots). (b) Per-class accuracy vs. β, where the accuracy at each β is from training an independent CEB model on the dataset. The per-class accuracy denotes the fraction of correctly predicted labels by the CEB model for the observed labelỹ. phase Alg. 1 converges to the phase transition points within a few iterations, and it discovers in total 3 phase transition points. Similar to the categorical case, we expect that each phase transition corresponds to the onset of learning a new class, and that the last class is much harder to learn due to a larger separation of β. Therefore, this class should have a much larger label noise so that it is hard to capture this component of maximum correlation between X and Y, as analyzed in representational maximum correlation (Section 4.2). Fig. 3 (b) plots the per-class accuracy with increasing β for running the Conditional Entropy Bottleneck (another variational bound on IB). We see that the first two predicted phase transition points β c 0, β c 1 closely match the observed onset of learning class 3 and class 0. Class 1 is observed to learn earlier than expected, possibly due to the gap between the variational IB objective and the true IB objective in continuous settings. By looking at the confusion matrix for the label noise (Fig. 7), we see that the ordering of onset of learning: class 2, 3, 0, 1, corresponds exactly to the decreasing diagonal element p(ỹ = 1|y = 1) (increasing noise) of the classes, and as predicted, class 1 has a much smaller diagonal element p(ỹ = 1|y = 1) than the other three classes, which makes it much more difficult to learn. This ordering of classes by difficulty is what our representational maximum correlation predicts. The most interesting region is right before β = 2, where accuracy decreases with β. Alg. 1 identifies both sides of that region, as well as points at or near all of the early obvious phase transitions. It also seems to miss later transitions, possibly due to the gap between the variational IB objective and the true IB objective in continuous settings. Finally, we investigate the CIFAR10 experiment from Section 1. The details of the experimental setup are described in Appendix I. This experiment stretches the current limits of our discrete approximation to the underlying continuous representation being learned by the models. Nevertheless, we can see in Fig. 4 that many of the visible empirical phase transitions are tightly identified by Alg. 1. Particularly, the onset of learning is predicted quite accurately; the large interval between the predicted β 3 = 1.21 and β 4 = 1.61 corresponds well to the continuous increase of I(X; Z) and I(Y ; Z) at the same interval. And Alg. 1 is able to identify many dense transitions not obviously seen by just looking at I(Y ; Z) vs. β curve alone. Alg. 1 predicts 9 phase transitions, exactly equal to |Y| − 1 for CIFAR10. In this work, we observe and study the phase transitions in IB as we vary β. We introduce the definition for IB phase transitions, and based on it derive a formula that gives a practical condition for IB phase transitions. We further understand the formula via Jensen's inequality and representational maximum correlation. 
We reveal the close interplay between the IB objective, the dataset and the learned representation, as each phase transition is learning a nonlinear maximum correlation component in the orthogonal space of the learned representation. We present an algorithm for finding the phase transitions, and show that it gives tight matches with observed phase transitions in categorical datasets, predicts onset of learning new classes and class difficulty in MNIST, and predicts prominent transitions in CIFAR10 experiments. This work is a first theoretical step towards a deeper understanding of the phenomenon of phase transitions in the Information Bottleneck. We believe our approach will be applicable to other "trade-off" objectives, like β-VAE and InfoDropout (a), where the model's ability to predict is balanced against a measure of complexity. Here we prove the Lemma 2.1, which will be crucial in the lemmas and theorems in this paper that follows. Lemma 2.1. For a relative perturbation function r(z|x) ∈ Q Z|X for a p(z|x), where r(z|x) satisfies E z∼p(z|x) [r(z|x)] = 0, we have that the IB objective can be expanded as Proof. Suppose that we perform a relative perturbation r(z|x) on p(z|x) such that the perturbed conditional probability is p (z|x) = p(z|x) (1 + · r(z|x)), then we have Therefore, we can denote the corresponding relative perturbation r(z) on p(z) as Similarly, we have And we can denote the corresponding relative perturbation r(z|y) on p(z|y) as We have The 0 th -order term is simply IB β [p(z|x)]. The first order term is The n th -order term for n ≥ 2 is In the last equality we have used Combining the terms with all orders, we have As a side note, the KL-divergence between p (z|x) = p(z|x)(1 + · r(z|x)) and p(z|x) is Therefore, to the second order, we have Similarly, we have up to the second order. Using similar procedure, we have up to the second-order, B PROOF OF LEMMA 0.1 Proof. From Lemma 2.1, we have The condition of is equivalent to Using Jensen's inequality and the convexity of the square function, we have The equality holds iff r(z|y) = E x∼p(x|y,z) [r(z|x)] is constant w.r.t. y, for any z., where the equality holds iff r(z|x) is constant w.r.t. x for any z. where r(z|y) = E x∼p(x|y,z) [r(z|x)] and r(z) = E x∼p(x|z) [r(z|x)]. which is always true due to that E[r 2 (z|x)] ≥ E[r 2 (z)], and will be a looser condition than Eq. above. Above all, we have Eq.. To empirically estimate G[p(z|x)] from a minibatch of {(x i, y i)}, i = 1, 2,...N and the encoder p(z|x), we can make the following Monte Carlo importance sampling estimation, where we use the samples {x j} ∼ p(x) and also get samples of {z i} ∼ p(z) = p(x)p(z|x), and have: Here Ω x (y i) denotes the set of x examples that has label of y i, and 1[·] is an indicator function that takes value 1 if its argument is true, 0 otherwise. for any x j. Combining all terms, we have that the empiricalĜ[p(z|x)] is given bŷ where {z i} ∼ p(z) and {x i} ∼ p(x). It is also possible to use different distributions for importance sampling, which will in different formulas for empirical estimation of Proof. For the parameterized 4 p θ (z|x) with θ ∈ Θ, after θ ← θ + ∆θ, where 5 ∆θ ∈ Θ is an infinitesimal perturbation on θ, we have that the distribution changes from p θ (z|x) to p θ+∆θ (z|x), 4 In this paper, θ = (θ1, θ2, ...θ k) T and,... is a k × k matrix with (i, j) element of ∂θ i ∂θ j. 5 Note that since Θ is a field, it is closed under subtraction, we have ∆θ ∈ Θ. 
∂θ 2 1 = 0, and similarly E y,z∼p θ (y,z) [In other words, the ∆θ 2 terms in the first-order variation δIB β [p θ (z|x)] vanish, and the remaining ∆θ 2 are all Similarly, we have Combining the continuity of T β (β) at β = β, and Eq. and Proof. When we r(z|x) is shifted by a global transformation r (z|x) ← r(z|x) + s(z), we have, and similarly r (z|y) ← r(z|y) + s(z). The numerator of G[r(z|x); p(z|x)] is then Proof. Using the condition of the theorem, we have that ∀r(z|x) ∈ Q 0 Z|X, there exists r 1 (z|x) ∈ Q Z|X and s(z) ∈ {s : Z → R|s bounded} s.t. r(z|x) = r 1 (z|x) + s(z). Note that the only difference between Q Z|X and Q Z|X is that Q Z|X requires E p(z|x) [r 1 (z|x)] = 0. Using Lemma 2.2, we have where r(z|x) doesn't have the constraint of E p(z|x) [·] = 0. After dropping the constraint of E z∼p(z|x) [r(z|x)] = 0, again using Lemma 2.2, we can let r(z) = E x∼p(x|z) [r(z|x)] = 0 (since we can perform the transformation r (z|x) ← r(z|x) − r(z), so that the new r (z) ≡ 0). Now we get a simpler formula for G[p(z|x)], as follows: where Q From Eq., we can further require that We have where F (y, z):= dxp(x|y, z)f (x, z). We have used Cauchy-Schwarz inequality, where the equality holds when g(y, z) = αF (y, z) for some α. Comparing Eq. and the supremum: we see that the only difference is that in the latter Therefore, where in the last equality we have let c(z) have "mass" only on the place where ρ 2 m (X, Y |Z = z) attains supremum w.r.t. z. Z|X, satisfying the requirement for ρ s (X, Y ; Z) (which equals ρ r (X, Y ; Z) by Eq. 28). In this section we study the behavior of p(z|x) on the phase transitions. We use the same categorical dataset (where |X| = |Y | = |Z| = 3 and p(x) is uniform, and p(y|x) is given in Fig. 5). In Fig. 6 we show the p(z|x) on the simplex before and after each phase transition. We see that the first phase transition corresponds to the separation of x = 2 (belonging to y = 2) w.r.t. x ∈ {0, 1} (belonging to classes y ∈ {0, 1}), on the p(z|x) simplex. The second phase transition corresponds to the separation of x = 0 with x = 1. Therefore, each phase transition corresponds to the ability to distinguish subset of examples, and learning of new classes. We use the MNIST training examples with class 0, 1, 2, 3, with a hidden label-noise matrix as given in Fig. 7, based on which at each minibatch we dynamically sample the observed label. We use conditional entropy bottleneck (CEB) as the variational IB objective, and run multiple independent instances with different the target β. We jump start learning by started training at β = 100 for 100 epochs, annealing β from 100 down to the target β over 600 epochs, and continue to train at the target epoch for another 800 epochs. The encoder is a three-layer neural net, where each hidden layer has 512 neurons and leakyReLU activation, and the last layer has linear activation. The classifier p(y|z) is a 2-layer neural net with a 128-neuron ReLU hidden layer. The backward encoder p(z|y) is also a 2-layer neural net with a 128-neuron ReLU hidden layer. We trained with Adam at learning rate of 10 −3, and anneal down with factor 1/(1 + 0.01 · epoch). For Alg. 1, for the f θ we use the same architecture as the encoder of CEB, and use |Z| = 50 in Alg. 1. We use the same CIFAR10 class confusion matrix provided in to generate noisy labels with about 20% label noise on average (reproduced in Table 1). We trained 28 × 1 Wide ResNet models using the open source implementation from as encoders for the Variational Information Bottleneck (VIB) . 
The 10 dimensional output of the encoder parameterized a mean-field Gaussian with unit covariance. Samples from the encoder were passed to the classifier, a 2 layer MLP. The marginal distributions were mixtures of 500 fully covariate 10-dimensional Gaussians, all parameters of which are trained. With this standard model, we trained 251 different models at β from 1.0 to 6.0 with step size of 0.02. As in , we jump-start learning by annealing β from 100 down to the target β. We do this over the first 4000 steps of training. The models continued to train for another 56,000 gradient steps after that, a total of 600 epochs. We trained with Adam at a base learning rate of 10 −3, and reduced the learning rate by a factor of 0.5 at 300, 400, and 500 epochs. The models converged to essentially their final accuracy within 40,000 gradient steps, and then remained stable. Figure 5: p(y|x) for the categorical dataset in Fig. 2 and Fig. 6. The value in i th row and j th column denotes p(y = j|x = i). p(x) is uniform. The accuracies reported in Figure 4 are averaged across five passes over the training set. We use |Z| = 50 in Alg. 1. | We give a theoretical analysis of the Information Bottleneck objective to understand and predict observed phase transitions in the prediction vs. compression tradeoff. | 1,249 | scitldr |
We propose two approaches of locally adaptive activation functions namely, layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks. The local adaptation of activation function is achieved by introducing scalable hyper-parameters in each layer (layer-wise) and for every neuron separately (neuron-wise), and then optimizing it using the stochastic gradient descent algorithm. Introduction of neuron-wise activation function acts like a vector activation function as opposed to the traditional scalar activation function given by fixed, global and layer-wise activations. In order to further increase the training speed, an activation slope based slope recovery term is added in the loss function, which further accelerate convergence, thereby reducing the training cost. For numerical experiments, a nonlinear discontinuous function is approximated using a deep neural network with layer-wise and neuron-wise locally adaptive activation functions with and without the slope recovery term and compared with its global counterpart. Moreover, solution of the nonlinear Burgers equation, which exhibits steep gradients, is also obtained using the proposed methods. On the theoretical side, we prove that in the proposed method the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and learning rate. Furthermore, the proposed adaptive activation functions with the slope recovery are shown to accelerate the training process in standard deep learning benchmarks using CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion data sets with and without data augmentation. In recent years, research on neural networks (NNs) has intensified around the world due to their successful applications in many diverse fields such as speech recognition, computer vision , natural language translation , etc. Training of NN is performed on data sets before using it in the actual applications. Various data sets are available for applications like image classification, which is a subset of computer vision. MNIST and their variants like, Fashion-MNIST , and KMNIST are the data sets for handwritten digits, images of clothing and accessories, and Japanese letters, respectively. Apart from MNIST, Semeion is a handwritten digit data set that contains 1593 digits collected from 80 persons. SVHN is another data set for street view house numbers obtained from house numbers in Google Street View images. CI-FAR is the popular data set containing color images commonly used to train machine learning algorithms. In particular, the CIFAR-10 data set contains 50000 training and 10000 testing images in 10 classes with image resolution of 32x32. CIFAR-100 is similar to the CIFAR-10, except it has 100 classes with 600 images in each class, which is more challenging than the CIFAR-10 data set. problems, where the approximate solutions of governing equations are obtained, as well as inverse problems, where parameters involved in the governing equation are inferred from the training data. Highly efficient and adaptable algorithms are important to design the most effective NN which not only increases the accuracy of the solution but also reduces the training cost. Various architectures of NN like Dropout NN are proposed in the literature, which can improve the efficiency of the algorithm for specific applications. 
Activation function plays an important role in the training process of NN. In this work, we are particularly focusing on adaptive activation functions, which adapt automatically such that the network can be trained faster. Various methods are proposed in the literature for adaptive activation function, like the adaptive sigmoidal activation function proposed by for multilayer feedforward NNs, while focuses on learning activation functions in convolutional NNs by combining basic activation functions in a data-driven way. Multiple activation functions per neuron are proposed , where individual neurons select between a multitude of activation functions. proposed a tunable activation function, where only a single hidden layer is used and the activation function is tuned. , used a similar idea of tunable activation function but with multiple outputs. Recently, Kunc and Kléma proposed a transformative adaptive activation functions for gene expression inference, see (Kunc & Kléma, 2019). One such adaptive activation function is proposed by introducing scalable hyper-parameter in the activation function, which can be optimized. Mathematically, it changes the slope of activation function thereby increasing the learning process, especially during the initial training period. Due to single scalar hyper-parameter, we call such adaptive activation functions globally adaptive activations, meaning that it gives an optimized slope for the entire network. One can think of doing such optimization at the local level, where the scalable hyper-parameter are introduced hidden layer-wise or even for each neuron in the network. Such local adaptation can further improve the performance of the network. Figure 1 shows a sketch of a neuron-wise locally adaptive activation function based physics-informed neural network (LAAF-PINN), where both the NN part along with the physicsinformed part can be seen. In this architecture, along with the output of NN and the residual term from the governing equation, the activation slopes from every neuron are also contributing to the loss function in the form of slope recovery term. The rest of the paper is organized as follows. Section 2 presents the methodology of the proposed layer-wise and neuron-wise locally adaptive activations in detail. This also includes a discussion on the slope recovery term, expansion of parametric space due to layer-wise and neuron-wise introduction of hyper-parameters, its effect on the overall training cost, and a theoretical for gradient decent algorithms. Section 3 gives numerical experiments, where we approximate a nonlinear discontinuous function using deep NN by the proposed approaches. We also solve the Burgers equation using the proposed algorithm and present various comparisons in appendix B. Section 4 presents numerical with various standard deep learning benchmarks using CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion data sets. Finally, in section 5, we summarize the of our work. We use a NN of depth D corresponding to a network with an input layer, D − 1 hidden-layers and an output layer. In the k th hidden-layer, N k number of neurons are present. Each hidden-layer of the network receives an output z k−1 ∈ R N k−1 from the previous layer where an affine transformation of the form is performed. The network weights w k ∈ R N k ×N k−1 and bias term b k ∈ R N k associated with the k th layer are chosen from independent and identically distributed sampling. 
The nonlinearactivation function σ(·) is applied to each component of the transformed vector before sending it as an input to the next layer. The activation function is an identity function after an output layer. Thus, the final neural network representation is given by the composition where the operator • is the composition operator, represents the trainable parameters in the network, u is the output and z 0 = z is the input. In supervised learning of solution of PDEs, the training data is important to train the neural network, which can be obtained from the exact solution (if available) or from high-resolution numerical solution given by efficient numerical schemes and it can be even obtained from carefully performed experiments, which may yield both high-and low-fidelity data sets. We aim to find the optimal weights for which the suitably defined loss function is minimized. In PINN the loss function is defined as where the mean squared error (MSE) is given by Here {x represents the residual training points in space-time domain, while {x represents the boundary/initial training data. The neural network solution must satisfy the governing equation at randomly chosen points in the domain, which constitutes the physicsinformed part of neural network given by first term, whereas the second term includes the known boundary/initial conditions, which must be satisfied by the neural network solution. The ing optimization problem leads to finding the minimum of a loss function by optimizing the parameters like, weights and biases, i.e., we seek to find w *, b * = arg min w,b∈Θ (J(w, b)). One can approximate the solutions to this minimization problem iteratively by one of the forms of gradient descent algorithm. The stochastic gradient descent (SGD) algorithm is widely used in machine learning community see, for a complete survey. In SGD the weights are updated as, where η l > 0 is the learning rate. SGD methods can be initialized with some starting value w 0. In this work, the ADAM optimizer , which is a variant of the SGD method is used. A deep network is required to solve complex problems, which on the other hand is difficult to train. In most cases, a suitable architecture is selected based on the researcher's experience. One can also think of tuning the network to get the best performance out of it. In this regard, we propose the following two approaches to optimize the adaptive activation function. Instead of globally defining the hyper-parameter a for the adaptive activation function, let us define this parameter hidden layer-wise as This gives additional D − 1 hyper-parameters to be optimized along with weights and biases. Here, every hidden-layer has its own slope for the activation function. One can also define such activation function at the neuron level as This gives additional k=1 N k hyper-parameters to be optimized. Neuron-wise activation function acts as a vector activation function as opposed to scalar activation function given by L-LAAF and global adaptive activation function (GAAF) approaches, where every neuron has its own slope for the activation function. The ing optimization problem leads to finding the minimum of a loss function by optimizing a k i along with weights and biases, i.e., we seek to find (a The process of training NN can be further accelerated by multiplying a with scaling factor n > 1. 
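As a concrete reading of the neuron-wise construction above, here is a small PyTorch sketch of a fully connected layer whose activation is σ(n·a_i^k·(w^k z^{k−1} + b^k)_i), with one trainable slope per neuron and a fixed scaling factor n, initialized so that n·a_i^k = 1; the class and function names are mine and the sketch is not the authors' implementation.

import torch
import torch.nn as nn

class NeuronWiseAdaptiveLayer(nn.Module):
    """Affine map followed by activation(n * a_i * (W z + b)_i), one trainable slope a_i per neuron."""
    def __init__(self, in_features, out_features, n_scale=10.0, activation=torch.tanh):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        # Initialize so that n * a_i = 1 for every neuron, as prescribed above.
        self.a = nn.Parameter(torch.full((out_features,), 1.0 / n_scale))
        self.n_scale = n_scale
        self.activation = activation

    def forward(self, z):
        return self.activation(self.n_scale * self.a * self.linear(z))

def make_nlaaf_network(in_dim=1, width=50, depth=4, out_dim=1, n_scale=10.0):
    """Depth-4, width-50 network as in the Section 3 experiment, with a linear output layer."""
    layers = [NeuronWiseAdaptiveLayer(in_dim, width, n_scale)]
    layers += [NeuronWiseAdaptiveLayer(width, width, n_scale) for _ in range(depth - 1)]
    layers.append(nn.Linear(width, out_dim))
    return nn.Sequential(*layers)

Replacing the per-neuron vector a by a single scalar per layer gives the L-LAAF variant, and sharing one scalar across all layers recovers the global (GAAF) case.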
The final form of the activation function is given by σ(na It is important to note that the introduction of the scalable hyper-parameter does not change the structure of the loss function defined previously. Then, the final adaptive activation function based neural network representation of the solution is given by In this case, the set of trainable parametersΘ consists of {w and {a In all the proposed methods, the initialization of scalable hyper-parameters is done such that na k i = 1, ∀n. The main motivation of adaptive activation function is to increase the slope of activation function, ing in non-vanishing gradients and fast training of the network. It is clear that one should quickly increase the slope of activation in order to improve the performance of NN. Thus, instead of only depending on the optimization methods, another way to achieve this is to include the slope recovery term based on the activation slope in the loss function as where the slope recovery term S(a) is given by where N is a linear/nonlinear operator. Although, there can be several choices of this operator, including the linear identity operator, in this work we use the exponential operator. The main reason behind this is that such term contributes to the gradient of the loss function without vanishing. The overall effect of inclusion of this term is that it forces the network to increase the value of activation slope quickly thereby increasing the training speed. We now provide a theoretical regarding the proposed methods. The following theorem states that a gradient descent algorithm minimizing our objective functionJ(Θ) in equation 3 does not converge to a sub-optimal critical point or a sub-optimal local minimum, for neither L-LAAF nor N-LAAF, given appropriate initialization and learning rates. In the following theorem, we treatΘ as a real-valued vector. Let Jc = M SE F + M SE u with the constant network u Θ (z) = u Θ (z) = c ∈ R N D for all z, z where c is a constant. Theorem 2.1. Let (Θ m) m∈N be a sequence generated by a gradient descent algorithm asΘ m+1 = Θ m − η m ∇J(Θ). Assume that J(Θ 0) < Jc + S for any c ∈ R N D, J is differentiable, and that for each i ∈ {1, . . ., N f}, there exist differentiable function ϕ i and input. Assume that at least one of the following three conditions holds. (i) (constant learning rate) ∇J is Lipschitz continuous with Lipschitz constant C (i.e., ∇J(Θ) − ∇J(Θ) 2 ≤ C Θ −Θ 2 for allΘ,Θ in its domain), and ≤ η m ≤ (2 −)/C, where is a fixed positive number. (ii) (diminishing learning rate) ∇J is Lipschitz continuous, η m → 0 and ∞ m=0 η m = ∞. (iii) (adaptive learning rate) the learning rate η m is chosen by the minimization rule, the limited minimization rule, the Armjio rule, or the Goldstein rule . Then, for both L-LAAF and N-LAAF, no limit point of (Θ m) m∈N is a sub-optimal critical point or a sub-optimal local minimum. The initial condition J(Θ 0) < Jc + S means that the initial value J(Θ 0) needs to be less than that of a constant network plus the highest value of the slope recovery term. Here, note that S < S. The proof of Theorem 2.1 is included in appendix A. In this section, we shall solve a regression problem of a nonlinear function approximation using deep neural network. The Burgers equation using physics-informed neural network is solved in appendix B. In this test case, a standard neural network (without physics-informed part) is used to approximate a discontinuous function. 
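As a compact end-to-end reference for this test case, the sketch below trains a layer-wise (L-LAAF) network, with one trainable slope per hidden layer, using a data-mismatch loss plus an exponential-type slope recovery term. Two caveats: the exact slope recovery formula and the exact discontinuous target function are not reproduced in the extracted text, so the forms used here (one over the mean of the exponentials of the layer-mean slopes, and a piecewise-sinusoidal stand-in target) are my assumptions; the remaining settings follow the stated setup (domain [−3, 3], 300 training points, four hidden layers of 50 tanh neurons, learning rate 2e-4, scaling factor n = 10).

import torch
import torch.nn as nn

torch.manual_seed(0)
n_scale, lr, width, depth = 10.0, 2e-4, 50, 4

# Stand-in discontinuous target with a jump at x = 0 (placeholder for the paper's function).
x = torch.linspace(-3.0, 3.0, 300).unsqueeze(1)
y = torch.where(x > 0, 1.0 + 0.1 * torch.sin(4.0 * x), 0.2 * torch.sin(6.0 * x))

# L-LAAF: plain linear layers plus one trainable slope a^k per hidden layer, with n * a^k = 1 at start.
hidden = nn.ModuleList([nn.Linear(1 if k == 0 else width, width) for k in range(depth)])
output = nn.Linear(width, 1)
slopes = nn.ParameterList([nn.Parameter(torch.tensor(1.0 / n_scale)) for _ in range(depth)])

def forward(z):
    for linear, a in zip(hidden, slopes):
        z = torch.tanh(n_scale * a * linear(z))
    return output(z)

def slope_recovery(slope_params):
    # One possible exponential instantiation: its value shrinks as the layer-mean slopes grow,
    # so adding it to the loss pushes the optimizer to increase the activation slopes.
    return 1.0 / torch.exp(torch.stack([a.mean() for a in slope_params])).mean()

optimizer = torch.optim.Adam(
    list(hidden.parameters()) + list(output.parameters()) + list(slopes), lr=lr)

for step in range(20000):
    optimizer.zero_grad()
    mse = ((forward(x) - y) ** 2).mean()
    loss = mse + slope_recovery(slopes)
    loss.backward()
    optimizer.step()
    if step % 5000 == 0:
        print(step, float(mse), [round(float(n_scale * a), 3) for a in slopes])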
In this case, the loss function consists of the data mismatch and the slope recovery term as The following discontinuous function with discontinuity at x = 0 location is approximated by a deep neural network. Here, the domain is [−3, 3] and the number of training points used is 300. The activation function is tanh, learning rate is 2.0e-4 and the number of hidden layers are four with 50 neurons in each layer. Figure 2 shows the solution (first column), solution in frequency domain (second column) and pointwise absolute error in log scale (third column). The solution by standard fixed activation function is given in the first row, GAAF solution is given in second row, whereas the third row shows the solution given by L-LAAF without and with (fourth row) slope recovery term. The solution given by N-LAAF without slope recovery term is shown in the fifth row and with slope recovery term in the sixth row. We see that the NN training speed increases for the locally adaptive activation functions compared to fixed and globally adaptive activations. Moreover, both L-LAAF and N-LAAF with slope recovery term accelerate training and yield the least error as compared to other methods. Figure 3 (top) shows the variation of na for GAAF, whereas the second row, left and right shows the layer-wise variation of na k for L-LAAF without and with the slope recovery term respectively. The third row, left and right shows the variation of scaled hyper-parameters for N-LAAF without and with the slope recovery term respectively, where the mean value of na k i along with its standard deviation (Std) are plotted for each hidden-layer. We see that the value of na is quite large with the slope recovery term which shows the rapid increase in the activation slopes. Finally, the comparison of the loss function is shown in figure 4 for standard fixed activation, GAAF, L-LAAF and N-LAAF without the slope recovery (left) and for L-LAAF and N-LAAF with the slope recovery (right) using a scaling factor of 10. The Loss function for both L-LAAF and N-LAAF without the slope recovery term decreases faster, especially during the initial training period compared to the fixed and global activation function based algorithms. The previous sections demonstrated the advantages of adaptive activation functions with PINN for physics related problems. One of the remaining questions is whether or not the advantage of adaptive activations remains with standard deep neural networks for other types of deep learning applications. To explore the question, this section presents numerical with various standard benchmark problems in deep learning. Figures 5 and 6 shows the mean values and the uncertainty intervals Figure 2: Discontinuous function: Neural network solution using standard fixed activation (first row), GAAF (second row), L-LAAF without (third row) and with (fourth row) slope recovery term, and N-LAAF without (fifth row) and with (sixth row) slope recovery term using the tanh activation. First column shows the solution which is also plotted in frequency domain (zoomed-view) as shown by the corresponding second column. Third column gives the point-wise absolute error in the log scale for all the cases. accelerates the minimization process of the training loss values. Here, all of GAAF, L-LAAF and N-LAAF use the slope recovery term, which improved the methods without the recovery term. Accordingly, the of GAAF are also new contributions of this paper. In general, L-LAAF improved against GAAF as expected. 
The standard cross entropy loss was used for training and plots. We used pre-activation ResNet with 18 layers for CIFAR-10, CIFAR-100, and SVHN data sets, whereas we used a standard variant of LeNet with ReLU for other data sets; i.e., the architecture of the variant of LeNet consists of the following five layers (with the three hidden layers): input layer, convolutional layer with 64 5 × 5 filters, followed by max pooling of size of 2 by 2 and ReLU, convolutional layer with 64 5 × 5 filters, followed by max pooling of size of 2 by 2 and ReLU, fully connected layer with 1014 output units, followed by ReLU, and Fully connected layer with the number of output units being equal to the number of target classes. All hyper-parameters were fixed a priori across all different data sets and models. We fixed the mini-batch size s to be 64, the initial learning rate to be 0.01, the momentum coefficient to be 0.9 and we use scaling factor n = 1 and 2. The learning rate was divided by 10 at the beginning of 10th epoch for all experiments (with and without data augmentation), and of 100th epoch for those with data augmentation. In this paper, we present two versions of locally adaptive activation functions namely, layer-wise and neuron-wise locally adaptive activation functions. Such local activation functions further improve the training speed of the neural network compared to its global predecessor. To further accelerate the training process, an activation slope based slope recovery term is added in the loss function for both layer-wise and neuron-wise activation functions, which is shown to enhance the performance of the neural network. Various NN and PINN test cases like nonlinear discontinuous function approximation and Burgers equation respectively, and benchmark deep learning problems like MNIST, CIFAR, SVHN etc are solved to verify our claim. Moreover, we theoretically prove that no sub-optimal critical point or local minimum attracts gradient descent algorithms in the proposed methods (L-LAAF and N-LAAF) with the slope recovery term under only mild assumptions. k=1 is a limit point of (Θ m) m∈N and a sub-optimal critical point or a sub-optimal local minimum. and h Following the proofs in (, Propositions 1.2.1-1.2.4), we have that ∇J(Θ) = 0 and J(Θ) < Jc + S, for all three cases of the conditions corresponding the different rules of the learning rate. Therefore, we have that for all k ∈ {1, . . ., D − 1}, Furthermore, we have that for all k ∈ {1, . . ., D − 1} and all j ∈ {1, . . ., N k}, By combining equation 5-equation 7, for all k ∈ {1, . . ., D − 1}, which implies that for all a k = 0 since (D − 1) exp(a k) = 0. This implies that J(Θ) = Jc + S, which contradicts with J(Θ) < Jc + S. This proves the desired statement for L-LAAF. For N-LAAF, we prove the statement by contradiction. Suppose that the parameter vectorΘ consisting of {w k=1 ∀j = 1, 2, · · ·, N k is a limit point of (Θ m) m∈N and a suboptimal critical point or a sub-optimal local minimum. Redefine and h for all j ∈ {1, . . ., N k}, where w k,j ∈ R 1×N k−1 and b k,j ∈ R. Then, by the same proof steps, we have that ∇J(Θ) = 0 and J(Θ) < Jc + S, for all three cases of the conditions corresponding the different rules of the learning rate. Therefore, we have that for all k ∈ {1, . . ., D − 1} and all j ∈ {1, . . ., N k}, By combining equation 6-equation 8, for all k ∈ {1, . . ., D − 1} and all j ∈ {1, . . ., N k},, which implies that for all a This implies that J(Θ) = Jc + S, which contradicts with J(Θ) < Jc + S. 
This proves the desired statement for N-LAAF. The Burgers equation is one of the fundamental partial differential equation arising in various fields such as nonlinear acoustics, gas dynamics, fluid mechanics etc, see for more details. The Burgers equation was first introduced by H. and later studied by J.M. in the context of theory of turbulence. Here, we consider the viscous Burgers equation given by equation equation 9 along with its initial and boundary conditions. The non-linearity in the convection term develops very steep solution due to small˜ value. We consider the Burgers equation given by with initial condition u(x, 0) = − sin(πx), boundary conditions u(−1, t) = u(1, t) = 0 and˜ = 0.01/π. The analytical solution can be obtained using the Hopf-Cole transformation, see for more details. The number of boundary and initial training points is 400, whereas the number of residual training points is 10000. The activation function is tanh, learning rate is 0.0008 and the number of hidden layers are 6 with 20 neurons in each layer. Figure 7 shows the evolution of frequency plots of the solution at three different times using the standard fixed activation function (first row), global adaptive activation function (second row), L-LAAF without (third row) and with (fourth row) slope recovery term, N-LAAF without (fifth row) and with (sixth row) slope recovery term using scaling factor n = 10. Again, for both L-LAAF and N-LAAF the frequencies are converging faster towards the exact solution (shown by black line) with and without slope recovery term as compared to the fixed and global activation function based algorithms. decreases faster for all adaptive activations, in particular GAAF. Even though it is difficult to see from the actual solution plots given by figure 8, one can see from the Figure 10: Burgers equation: comparison of na k for L-LAAF for all six hidden-layers. First three columns represent for L-LAAF without slope recovery term whereas the last three columns are with slope recovery term. In all simulations, the scaling factor n is 10. Figure 10 shows the comparison of layer-wise variation of na k for L-LAAF with and without slope recovery term. It can be seen that, the presence of slope recovery term further increases the slope of activation function thereby increasing the training speed. Similarly, figure 11 shows the mean and standard deviation of na k i for N-LAAF with and without slope recovery term, which again validates that with slope recovery term network training speed increases. for N-LAAF for all six hidden-layers. First three columns represent resuls for N-LAAF without the slope recovery term whereas the last three columns are with slope recovery term. In all simulations, the scaling factor n is 10. | Proposing locally adaptive activation functions in deep and physics-informed neural networks for faster convergence | 1,250 | scitldr |
Learning in Gaussian Process models occurs through the adaptation of hyperparameters of the mean and the covariance function. The classical approach entails maximizing the marginal likelihood yielding fixed point estimates (an approach called Type II maximum likelihood or ML-II). An alternative learning procedure is to infer the posterior over hyperparameters in a hierarchical specification of GPs we call Fully Bayesian Gaussian Process Regression (GPR). This work considers two approximations to the intractable hyperparameter posterior, 1) Hamiltonian Monte Carlo (HMC) yielding a sampling based approximation and 2) Variational Inference (VI) where the posterior over hyperparameters is approximated by a factorized Gaussian (mean-field) or a full rank Gaussian accounting for correlations between hyperparameters. We analyse the predictive performance for fully Bayesian GPR on a range of benchmark data sets. The Gaussian process (GP) posterior is heavily influenced by the choice of the covariance function which needs to be set a priori. Specification of a covariance function and setting the hyperparameters of the chosen covariance family are jointly referred to as the model selection problem . A preponderance of literature on GPs address model selection through maximization of the marginal likelihood, ML-II . This is an attractive approach as the marginal likelihood is tractable in the case of a Gaussian noise model. Once the point estimate hyperparameters have been selected typically using conjugate gradient methods the posterior distribution over latent function values and hence predictions can be derived in closed form; a compelling property of GP models. While straightforward to implement the non-convexity of the marginal likelihood surface can pose significant challenges for ML-II. The presence of multiple modes can make the process prone to overfitting especially when there are many hyperparameters. Further, weakly identified hyperparameters can manifest in flat ridges in the marginal likelihood surface (where different combinations of hyperparameters give similar marginal likelihood value) making gradient based optimisation extremely sensitive to starting values. Overall, the ML-II point estimates for the hyperparameters are subject to high variability and underestimate prediction uncertainty. The central challenge in extending the Bayesian treatment to hyperparameters in a hierarchical framework is that their posterior is highly intractable; this also renders the predictive posterior intractable. The latter is typically handled numerically by Monte Carlo integration yielding a non-Gaussian predictive posterior; it yields in fact a mixture of GPs. The key question about quantifying uncertainty around covariance hyperparameters is examining how this effect propagates to the posterior predictive distribution under different approximation schemes. Given observations (X, y) where y i are noisy realizations of some latent function values f corrupted with Gaussian noise, j ) denote a positive definite covariance function parameterized with hyperparameters θ and the corresponding covariance matrix K θ. The hierarchical GP framework is given by, Prior over hyperparameters θ ∼ p(θ) The generative model in implies the joint posterior over unknowns given as, where Z is the unknown normalization constant. The predictive distribution for unknown test inputs X integrates over the joint posterior, (where we have suppressed the conditioning over inputs X, X for brevity). 
The inner integral p(f |f, y, θ)p(f |θ, y)df reduces to the standard GP predictive posterior with fixed hyperparameters, where, where K θ denotes the covariance matrix evaluated between the test inputs X and K * θ denotes the covariance matrix evaluated between the test inputs X and training inputs X. Under a Gaussian noise setting the hierarchical predictive posterior is reduced to, where f is integrated out analytically and θ j are draws from the hyperparameter posterior. The only intractable integral we need to deal with is p(θ|y) ∝ p(y|θ)p(θ) and predictive posterior follows as per eq.. Hence, the hierarchical predictive posterior is a multivariate mixture of Gaussians (Appendix section 6.2). The distinct advantage of HMC over other MCMC methods is the suppression of the random walk behaviour typical of Metropolis and variants. Refer to for a detailed tutorial. In the experiments we use a self-tuning variant of HMC called the No-U-TurnSampler (NUTS) proposed in in which the path length is deterministically adjusted for every iteration. Empirically, NUTS is shown to work as well as a hand-tuned HMC. By using NUTS we avoid the overhead in determining good values for the step-size and path length (L). We use an identity mass matrix with 500 warm-up iterations and run 4 chains to detect mode switching which can sometimes adversely affect predictions. Further, the primary variables are declared as the log of the hyperparameters log(θ) as this eliminates the positivity constraints that we otherwise we need to account for. The computational cost of the HMC scheme is dominated by the need to invert the covariance matrix K θ which is O(N 3). We largely follow the approach in. We transform the support of hyperparameters θ such that they live in the real space R J where J is the number of hyperparameters. Let η = g(θ) = log(θ) and we proceed by setting the variational family to, in the mean-field approximation where λ mf = (µ 1, . . ., µ J, ν 1, . . ., ν J) is the vector of unconstrained variational parameters (log(σ 2 j) = ν j ) which live in R 2J. In the full rank approximation the variational family takes the form, where we use the Cholesky factorization of the covariance matrix Σ so that the variational parameters λ f r = (µ, L) are unconstrained in R J+J(J+1)/2. The variational objective, ELBO is maximised in the transformed η space using stochastic gradient ascent and any intractable expectations are approximated using monte carlo integration. where the term |J g −1 (η)| denotes the Jacobian of the inverse transformation hinges on automatic differentiation and the re-parametrization trick . The computational cost per iteration is O(N M J) where J is the number of hyperparameters and M is the number of MC samples used in computing stochastic gradients. We evaluate 4 UCI benchmark regression data sets under fully Bayesian GPR (see Table 1). For VI we evaluate the mean-field and full-rank approximations. The top line shows the baseline ML-II method. The two metrics shown are: 1) RMSE -square root mean squared error and 2) NLPD -negative log of the predictive density averaged across test data. Except for'wine' which is a near linear dataset, HMC and full-rank variational schemes exceed the performance of ML-II. By looking at Fig.1 one can notice how the prediction intervals under the full Bayesian schemes capture the true data points. HMC generates a wider span of functions relative to VI (indicated by the uncertainty interval 1). 
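Given draws θ_j from either approximation (HMC samples or samples from the fitted mean-field or full-rank Gaussian), the predictive distribution in eq. (5) is a mixture of fixed-hyperparameter GP predictives. A minimal sketch, reusing rbf_kernel from the sketch above and again assuming a zero mean function:

```python
def gp_predict(X, y, X_star, theta):
    # Standard GP predictive with fixed hyperparameters:
    # mean = K_*^T (K_theta + s_n^2 I)^{-1} y,
    # var  = diag(K_**) - diag(K_*^T (K_theta + s_n^2 I)^{-1} K_*) + s_n^2.
    lengthscale, signal_var, noise_var = theta
    K = rbf_kernel(X, X, lengthscale, signal_var) + noise_var * np.eye(len(y))
    K_star = rbf_kernel(X, X_star, lengthscale, signal_var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_star.T @ alpha
    v = np.linalg.solve(L, K_star)
    # For the RBF kernel, diag(K_**) equals the signal variance.
    var = signal_var - (v ** 2).sum(axis=0) + noise_var
    return mean, var

def predictive_mixture(X, y, X_star, theta_samples):
    # Monte Carlo approximation of eq. (5): a mixture of Gaussians over draws theta_j.
    moments = [gp_predict(X, y, X_star, t) for t in theta_samples]
    means = np.stack([m for m, _ in moments])
    variances = np.stack([v for _, v in moments])
    mix_mean = means.mean(axis=0)
    # Law of total variance: E[var] + Var[mean] across the hyperparameter draws.
    mix_var = variances.mean(axis=0) + means.var(axis=0)
    return mix_mean, mix_var
```

Mixing the per-draw moments with the law of total variance is what widens the predictive intervals relative to ML-II: uncertainty about the hyperparameters is propagated into the predictive variance rather than discarded.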
The mean-field (MF) performance although inferior to HMC and full-rank (FR) VI still dominates the ML-II method. Further, while HMC is the gold standard and gives a more exact approximation, the VI schemes provide a remarkably close approximation to HMC in terms of error. The higher RMSE of the MF scheme compared to FR and HMC indicates that taking into account correlations between the hyperparameters improves prediction quality. Data set CO 2 Wine Concrete Airline We demonstrate the feasibility of fully Bayesian GPR in the Gaussian likelihood setting for moderate sized high-dimensional data sets with composite kernels. We present a concise comparative analysis across different approximation schemes and find that VI schemes based on the Gaussian variational family are only marginally inferior in terms of predictive performance to the gold standard HMC. While sampling with HMC can be tuned to generate samples from multi-modal posteriors using tempered transitions , the predictions can remain invariant to samples from different hyperparameter modes. Fully Bayesian bottom: Airline). In the CO 2 data where we undertake long-range extrapolation, the uncertainty intervals under the full Bayesian schemes capture the true observations while ML-II underestimates predictive uncertainty. For the Airline dataset, red in each twoway plot denotes ML-II, the uncertainty intervals under the full Bayesian schemes capture the upward trend better than ML-II. The latter also misses on structure that the other schemes capture. inference in GPs is highly intractable and one has to consider the trade-off between computational cost, accuracy and robustness of uncertainty intervals. Most interesting real-world applications of GPs entail hand-crafted kernels involving many hyperparameters where there risk of overfitting is not only higher but also hard to detect. A more robust solution is to integrate over the hyperparameters and compute predictive intervals that reflect these uncertainties. An interesting question is whether conducting inference over hierarchies in GPs increases expressivity and representational power by accounting for a more diverse range of models consistent with the data. More specifically, how does it compare to the expressivity of deep GPs with point estimate hyperparameters. Further, these general approximation schemes can be considered in conjunction with different incarnations of GP models where transformations are used to warp the observation space yielding warped GPs or warp the input space either using parametric transformations like neural nets yielding deep kernel learning or non-parametric ones yielding deep GPs (6. Appendix In early accounts, , and explore the integration over covariance hyperparameters using HMC in the regression and classification setting. More recently, use a slice sampling scheme for covariance hyperparameters in a general likelihood setting specifically addressing the coupling between latent function values f and hyperparameters θ. conduct a comparative evaluation of MCMC schemes for the full Bayesian treatment of GP models. Other works like explore the MCMC approach to variationally sparse GPs by using a scheme that jointly samples inducing points and hyperparameters. explore a full Bayesian inference framework for regression using HMC but only applies to separable covariance structures together with grid-structured inputs for scalability. 
On the variational learning side, ; jointly select inducing points and hyperparameters, hence the posterior over hyperparameters is obtained as a side-effect where the inducing points are the main goal. In more recent work, propose a novel variational scheme for sparse GPR which extends the Bayesian treatment to hyperparameters. Extract the 2.5 th percentile ⇒ f i(r l) where r l = 2.5 100 × T Extract the 97.5 th percentile ⇒ f i(ru) where r u = 97.5 All the four data sets use composite kernels constructed from base kernels. Table 2 summarizes the base kernels used and the set of hyperparameters for each kernel. All hyperparameters are given vague N priors in log space. Due to the sparsity of Airline data, several of the hyperparameters were weakly identified and in order to constrain inference to a reasonable range we resorted to a tighter normal prior around the ML-II estimates and Gamma(2, 0.1) priors for the noise hyperparameters. All the experiments were done in python using pymc3 . In the case of HMC, 4 chains were run to convergence and one chain was selected to compute predictions. For mean-field and full rank VI, a convergence threshold of 1e-4 was set for the variational parameters, optimisation terminated when all the variational parameters (means and standard deviations) concurrently changed by less than 1e-4. For'wine' and'concrete' data sets we use a random 50/50 training/test split. For'CO 2' we use the first 545 observations as training and for'Airline' we use the first 100 observations as training. Table 2: Base kernels used in the UCI experiments. k SE denotes the squared exponential kernel, k ARD denotes the automatic relevance determination kernel (squared exponential over dimensions), k P er denotes the periodic kernel, k RQ denotes the rational quadratic kernel and k N oise denotes the white kernel for stationary noise. Data set Composite Kernel In the figures and tables below, a prefix's' denotes signal std. deviation, a prefix'ls' denotes lengthscale and a prefix'n' denotes noise std. deviation. The figure below shows marginal posteriors of the hyperparamters used in the Airline kernel. We can make the following remarks: 1. It is evident that sampling and variational optimisation do not converge to the same region of the hyperparameter space as ML-II. 2. Given that the predictions are better under the full Bayesian schemes, this indicates that ML-II is in an inferior local optimum. 3. The mean-field marginal posteriors are narrower than the full rank and HMC posteriors as is expected. Full rank marginal posteriors closely approximate the HMC marginals. 4. The noise std. deviation distribution learnt under the full Bayesian schemes is higher than ML-II point estimate indicating overfitting in this particular example. | Analysis of Bayesian Hyperparameter Inference in Gaussian Process Regression | 1,251 | scitldr |
We consider the problem of using variational latent-variable models for data compression. For such models to produce a compressed binary sequence, which is the universal data representation in a digital world, the latent representation needs to be subjected to entropy coding. Range coding as an entropy coding technique is optimal, but it can fail catastrophically if the computation of the prior differs even slightly between the sending and the receiving side. Unfortunately, this is a common scenario when floating point math is used and the sender and receiver operate on different hardware or software platforms, as numerical round-off is often platform dependent. We propose using integer networks as a universal solution to this problem, and demonstrate that they enable reliable cross-platform encoding and decoding of images using variational models. The task of information transmission in today's world is largely divided into two separate endeavors: source coding, or the representation of data (such as audio or images) as sequences of bits, and channel coding, representing sequences of bits as analog signals on imperfect, physical channels such as radio waves BID7. This decoupling has substantial benefits, as the binary representations of arbitrary data can be seamlessly transmitted over arbitrary physical channels by only changing the underlying channel code, rather than having to design a new code for every possible combination of data source and physical channel. Hence, the universal representation of any compressed data today is the binary channel, a representation which consists of a variable number of binary symbols, each with probability 1 2, and no noise (i.e. uncertainty). As a latent representation, the binary channel unfortunately is a severe restriction compared to the richness of latent representations defined by many variational latent-variable models in the literature (e.g., BID13 BID22 BID18, and in particular models targeted at data compression BID23 BID0 . Variational latent-variable models such as VAEs BID13 consist of an encoder model distribution e(y | x) bringing the data x into a latent representation y, and a decoder model distribution d(x | y), which represents the data likelihood conditioned on the latents. Given an encoder e, we observe the marginal distribution of latents m(y) = E x [e(y | x)], where the expectation runs over the (unknown) data distribution. The prior p(y) is a variational estimate of the marginal BID1.By choosing the parametric forms of these distributions and the training objective appropriately, many such models succeed in representing relevant information in the data they are trained for quite compactly (i.e., with a small expected Kullback-Leibler (KL) divergence between the encoder and the prior, E x D KL [e p]), and so may be called compressive in a sense. However, not all of them can be directly used for practical data compression, as the representation needs to be further converted into binary (entropy encoded). This conversion is typically performed by range coding, or arithmetic coding BID20. Range coding is asymptotically optimal: the length of the binary sequence quickly converges to the expected KL divergence in bits, for reasonably large sequences (such as, for one image). For this to hold, the following requirements must be satisfied: Figure 1: The same image, decoded with a model computing the prior using integer arithmetic (left), and the same model using floating point arithmetic (right). 
The image was decoded correctly, beginning in the top-left corner, until floating point round-off error caused a small discrepancy between the sender's and the receiver's copy of the prior, at which point the error propagated catastrophically.• The representation must be discrete-valued, i.e. have a finite number of states, and be noiseless -i.e. the conditional entropy of the encoder must be zero: DISPLAYFORM0 • All scalar elements of the representation y must be brought into a total ordering, and the prior needs to be written using the chain rule of calculus (as a product of conditionals), as the algorithm can only encode or decode one scalar random variable at a time.• Both sides of the binary channel (i.e. sender and receiver) must be able to evaluate the prior, and they must have identical instances of it. The latter point is crucial, as range coding is extremely sensitive to differences in p between sender and receiver -so sensitive, in fact, that even small perturbations due to floating point round-off error can lead to catastrophic error propagation. Unfortunately, numerical round-off is highly platform dependent, and in typical data compression applications, sender and receiver may well employ different hardware or software platforms. Round-off error may even be non-deterministic on one and the same computer. Figure 1 illustrates a decoding failure in a model which computes p using floating point math, caused by such computational non-determinism in sender vs. receiver. Recently, latent-variable models have been explored that employ artificial neural networks (ANNs) to compute hierarchical or autoregressive priors BID22 BID18, including some of the best-performing learned image compression models BID17 BID14. Because ANNs are typically based on floating point math, these methods are vulnerable to catastrophic failures when deployed on heterogeneous platforms. To address this problem, and enable use of powerful learned variational models for real-world data compression, we propose to use integer arithmetic in these ANNs, as floating-point arithmetic cannot presently be made deterministic across arbitrary platforms. We formulate a type of quantized neural network we call integer networks, which are specifically targeted at generative and compression models, and at preventing computational non-determinism in computation of the prior. Because full determinism is a feature of many existing, widely used image and video compression methods, we also consider using integer networks end to end for computing the representation itself. ANNs are typically composite functions that alternate between linear and elementwise nonlinear operations. One linear operation followed by a nonlinearity is considered one layer of the network. To ensure that such a network can be implemented deterministically on a wide variety of hardware platforms, we restrict all the data types to be integral, and all operations to be implemented either with basic arithmetic or lookup tables. Because integer multiplications (including matrix multiplications or convolutions) increase the dynamic range of the output compared to their inputs, we introduce an additional step after each linear operator, where we divide each of its output by a learned parameter.. This nonlinearity can be implemented deterministically either using a lookup table or simply using a clipping operation. 
The corresponding scaled cumulative of a generalized Gaussian with β = 4 used for computing gradients is plotted in cyan, and other choices of β in gray. Right: Example nonlinearity approximating hyperbolic tangent for 4-bit signed integer outputs, given by g Qtanh (v) = Q(7 tanh( v 15)). This nonlinearity can be implemented deterministically using a lookup table. The corresponding scaled hyperbolic tangent used for computing gradients is plotted in cyan. Concretely, we define the relationship between inputs u and outputs w of one layer as: DISPLAYFORM0 In order, the inputs u are subjected to a linear transform H (a matrix multiplication, or a convolution); a bias vector b is added; the is divided elementwise by a vector c, yielding an intermediate vector v; and finally, an elementwise nonlinearity g is applied to v. The activations w and all intermediate , as well as the parameters H, b, and c are all defined as integers. However, they may use differing number formats. For v to be integral, we define here to perform rounding division (equivalent to division followed by rounding to the nearest integer). In programming languages such as C, this can be implemented with integer operands m, n as DISPLAYFORM1 where Q rounds to the nearest integer and / / is floor division; here, the addition can be folded into the bias b as an optimization. We constrain the linear filter coefficients H and the bias vector b to generally use signed integers, and the scaling vector c to use unsigned integers. We implement the accumulators of the linear transform with larger bit width than the activations and filter coefficients, in order to reflect the potentially increased dynamic range of multiplicative operations. We assume here that the bias and scaling vectors, as well as the intermediate vector v, have the same bit width as the accumulators. The elementwise nonlinearity g must be saturating on both ends of its domain, because integers can only represent finite number ranges. In order to maximize utility of the dynamic range, we scale nonlinearities such that their range matches the bit width of w, while their domain can be scaled somewhat arbitrarily. Depending on the range of the nonlinearity, the activations w may use a signed or unsigned number format. For instance, a reasonable choice of number formats and nonlinearity would be:H: 8-bit signed b, v: 32-bit signed (same as accumulator)c: 32-bit unsigned w: 8-bit unsigned g QReLU (v) = max (min(v, 255), 0) In this example, the nonlinearity can be implemented with a simple clipping operation. Refer to figure 2, left, for a visualization (for visualization purposes, the figure shows a smaller bit width). H: 4-bit signed b, v: 16-bit signed (same as accumulator) c: 16-bit unsigned w: 4-bit signed DISPLAYFORM0 Here, the nonlinearity approximates the hyperbolic tangent, a widely used nonlinearity. It may be best implemented using a lookup table (see figure 2, right, for a visualization). We scale its range to fill the 4-bit signed integer number format of w by multiplying its output with 7. The domain can be scaled somewhat arbitrarily, since v has a larger bit width than w. When it is chosen too small, w may not utilize all integer values, leading to a large quantization error. When it is chosen too large, overflow may occur in v, or the size of the lookup table may grow too large for practical purposes. Therefore, it is best to determine the input scaling based on the shape of the nonlinearity and the available dynamic range. 
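As an illustration of eq. (1) with the 8-bit QReLU format above, the following sketch (not from the paper; shapes and parameter values are arbitrary) evaluates one integer layer, writing the rounding division exactly as the C-style integer expression given earlier. The choice of the input scaling for the g_Qtanh example is picked up again just below.

```python
import numpy as np

def rounding_division(m, n):
    # Q(m / n) for integer operands, mirroring the C expression (m + n // 2) // n.
    return (m + n // 2) // n

def integer_layer(u, H, b, c):
    # One layer of eq. (1): w = g_QReLU(rounding_division(H u + b, c)),
    # using the example formats: H int8, b/c and accumulator int32, w uint8.
    v = H.astype(np.int32) @ u.astype(np.int32) + b.astype(np.int32)
    v = rounding_division(v, c.astype(np.int32))
    # g_QReLU: clip to the unsigned 8-bit range [0, 255].
    return np.clip(v, 0, 255).astype(np.uint8)

# Example usage with random integer parameters (illustrative shapes only).
rng = np.random.default_rng(0)
H = rng.integers(-128, 128, size=(16, 32), dtype=np.int8)
b = rng.integers(-1024, 1024, size=16, dtype=np.int32)
c = rng.integers(1, 2 ** 10, size=16, dtype=np.uint32)
u = rng.integers(0, 256, size=32, dtype=np.uint8)
w = integer_layer(u, H, b, c)
```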
Here, we simply chose the value of 15 "by eye", so that the nonlinearity is reasonably well represented with the lookup table (i.e., we made sure that at least two or three input values are mapped to each output value, in order to preserve the approximate shape of the nonlinearity). To effectively accumulate small gradient signals, we train the networks entirely using floating point computations, rounded to integers after every computational operation, while the backpropagation is done with full floating point precision. More concretely, we define the integer parameters H, b, and c as functions of their floating point equivalents H, b, and c, respectively: DISPLAYFORM0... DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 Here, we simply rescale each element of b using a constant K, which is the bit-width of the kernel H (e.g. 8-bits in the QReLu networks), and round it to the nearest integer. The reparameterization mapping r is borrowed from: DISPLAYFORM4 When c is small, perturbations in c can lead to excessively large fluctuations of the quotient (i.e., the input to the nonlinearity). This leads to instabilities in training. r ensures that values of c are always positive, while gracefully scaling down gradient magnitudes on c near zero. Effectively, the step size on c is multiplied with a factor that is approximately linear in c.Before rounding the linear filter coefficients in H = [h 1, . . ., h N], we apply a special rescaling function s to each of its filters h: DISPLAYFORM5 s rescales each filter such that at least one of its minimum and maximum coefficients hits one of the dynamic range bounds (−2 K−1 and 2 K−1 −1), while keeping zero at zero. This represents the finest possible quantization of the filter given its integer representation, and thus maximizes accuracy. To prevent division by zero, we ensure the divisor is larger than or equal to a small constant (for example, = 10 −20).In order to backpropagate gradient signals into the parameters, one cannot simply take gradients of the loss function with respect to H, b, or c, since the rounding function Q has zero gradients almost everywhere, except for the half-integer positions where the gradient is positive infinity. A simple remedy is to replace the derivative of Q with the identity function, since this is the smoothed gradient across all rounded values. Further, we treat the rescaling divisor s as if it were a constant. That is, we compute the derivatives of the loss function with respect to H, b, and c as with the chain rule of calculus, but overriding: DISPLAYFORM6 where r is the replacement gradient function for r as proposed by. After training is completed, we compute the integer parameters H, b and c one more time, and from then on use them for evaluation. Note that further reparameterization of the kernels H, such as Sadam, or of the biases b or scaling parameters c, is possible by simply chaining reparameterizations. In addition to rounding the parameters, it is necessary to round the activations. To obtain gradients for the rounding division, we simply substitute the gradient of floating point division. To estimate gradients for the rounded activation functions, we replace their gradient with the corresponding nonrounded activation function, plotted in cyan in figure 2. In the case of QReLU, the gradient of the clipping operation is a box function, which can lead to training getting stuck, since if activations consistently hit one of the bounds, no gradients are propagated back (this is sometimes called the "dead unit" problem). 
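A sketch of the training-time computation just described, in PyTorch-style code: rounding with a straight-through gradient, the per-filter rescaling s(·), and integer parameters derived from latent floating-point copies. The bias rescaling by 2^K and the softplus used to keep c positive are assumptions standing in for the exact reparameterizations, which are not reproduced here; the remedy for the dead-unit problem is discussed next.

```python
import torch
import torch.nn.functional as F

def ste_round(x):
    # Round to the nearest integer in the forward pass, but let gradients pass
    # through unchanged (the derivative of Q is replaced by the identity).
    return x + (torch.round(x) - x).detach()

def rescale_filter(h, K=8, eps=1e-20):
    # s(h): divide the filter by the smallest factor that puts its minimum or
    # maximum on a signed K-bit bound (-2^(K-1) or 2^(K-1) - 1), keeping zero at zero.
    divisor = torch.maximum(h.max() / (2 ** (K - 1) - 1), h.min() / (-(2 ** (K - 1))))
    return h / torch.clamp(divisor, min=eps)

def quantized_parameters(h_float, b_float, c_float, K=8):
    # Integer-valued parameters used in the forward pass; gradients reach the
    # latent floating-point copies via the straight-through estimator.
    H = ste_round(torch.stack([rescale_filter(h, K) for h in h_float]))
    b = ste_round((2.0 ** K) * b_float)        # bias rescaling by 2^K is an assumption
    c = ste_round(1.0 + F.softplus(c_float))   # softplus stands in for the mapping r(.)
    return H, b, c
```

After training is finished, quantized_parameters would be evaluated one last time and the resulting integer tensors stored for deterministic inference.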
As a remedy, we replace the gradient instead with DISPLAYFORM7 where DISPLAYFORM8 β, and L is the bit width of w. This function corresponds to a scaled generalized Gaussian probability density with shape parameter β. In this context, we can think of β as a temperature parameter that makes the function converge to the gradient of the clipping operation as β goes to infinity. Although this setting permits an annealing schedule, we simply chose β = 4 and obtained good . The integral of this function is plotted in figure 2 (left) in cyan, along with other choices of β in gray. Suppose our prior on the latent representation is p(y | z), where z summarizes other latent variables of the representation (it may be empty). To apply range coding, we need to impose a total ordering on the elements of y and write it as a chain of conditionals: DISPLAYFORM0 where y:i denotes the vector of all elements of y preceding the ith. A common assumption is that p is a known distribution, with parameters θ i computed by an ANN g: DISPLAYFORM1 We simply propose here to compute g deterministically using an integer network, discretizing the parameters θ to a reasonable accuracy. If p(y i | θ i) itself cannot be computed deterministically, we can precompute all possible values and express it as a lookup table over y i and θ i.As an example, consider the prior used in the image compression model proposed by, which is a modified Gaussian with scale parameters conditioned on another latent variable: DISPLAYFORM2 We reformulate the scale parameters σ as: DISPLAYFORM3 where θ = g(z) is computed using an integer network. The last activation function in g is chosen to have integer outputs of L levels in the range [0, L − 1]. Constants σ min, σ max, and L determine the discretized selection of scale parameters used in the model. The discretization is chosen to be logarithmic, as this choice minimizes E x D KL [e p] for a given number of levels. During training, we can simply backpropagate through this reformulation, and through g as described in the previous section. After training, we precompute all possible values of p as a function of y i and θ i and form a lookup table, while g is implemented with integer arithmetic. For certain applications, it can be useful not only to be able to deploy a compression model across heterogenous platforms, but to go even further in also ensuring identical reconstructions of the data across platforms. To this end, it can be attractive to make the entire model robust to non-determinism. To use integer networks in the encoder or decoder, one can use the equivalent construction as in FORMULA0 Jang et al. FORMULA0 and Ágústsson et al. are concerned with producing gradients for categorical distributions and vector quantization (VQ), respectively. In both methods, the representation is found by evaluating an ANN followed by an arg max function, while useful gradients are obtained by substituting the arg max with a softmax function. Since arg max can be evaluated deterministically in a platform-independent way, and evaluating a softmax function with rounded inputs is feasible, integer networks can be combined with these models without additional modifications. Theis et al. FORMULA0 and differ mostly in the details of interaction between the encoder and the prior. 
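Returning to the lookup-table construction for the prior described above, here is a small sketch of the logarithmically spaced scale table indexed by the integer output θ_i of g, together with a precomputed probability table over integer symbols. The closed form of the spacing and the unit-bin Gaussian integration are assumptions standing in for the exact prior, which is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def scale_table(sigma_min, sigma_max, L):
    # Logarithmic discretization of the scale parameter into L levels,
    # indexed by the integer output theta_i in [0, L-1] of the network g.
    return sigma_min * (sigma_max / sigma_min) ** (np.arange(L) / (L - 1))

def probability_table(sigma_min, sigma_max, L, y_min=-64, y_max=64):
    # Precompute p(y_i | theta_i) for every integer symbol and every scale level.
    # The conditional is taken here as a zero-mean Gaussian integrated over
    # unit-width bins -- a stand-in for the paper's exact prior.
    sigmas = scale_table(sigma_min, sigma_max, L)
    y = np.arange(y_min, y_max + 1)
    upper = norm.cdf((y[None, :] + 0.5) / sigmas[:, None])
    lower = norm.cdf((y[None, :] - 0.5) / sigmas[:, None])
    return upper - lower  # shape (L, num_symbols); rows indexed by theta_i

table = probability_table(sigma_min=0.1, sigma_max=32.0, L=64)
```

At decoding time, both sender and receiver index into the same integer-valued table, so no floating point computation of the prior is needed.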
These two approaches are particularly interesting for image compression, as they scale well: Image compression models are often trained with a rate-distortion objective with a Lagrange parameter λ, equivalent to β in the β-VAE objective BID10 BID1. Depending on the parameter, the latent representation carries vastly different amounts of information, and the optimal number of latent states in turn varies with that. While the number of latent states is a hyperparameter that needs to be chosen ahead of time in the categorical/VQ case, the latter two approaches can extend it as needed during training, because the latent states are organized along the real line. Further, for categorical distributions as well as VQ, the required dimensionality of the function computing the parameters grows linearly with the number of latent states due to their use of the arg max function. In the latter two models, the number of states can grow arbitrarily without increasing the dimensionality of g. Both BID23 and use deterministic encoder distributions (i.e. degenerating to delta distributions) during evaluation, but replace them with probabilistic versions for purposes of estimating E x D KL [e p] during training. BID23 propose to use the following encoder distribution: DISPLAYFORM0 where U is the uniform distribution and g is an ANN. They replace the gradient of the quantizer with the identity. During evaluation, y = Q(g(x)) is used as the representation. use the following distribution during training: DISPLAYFORM1 which makes y shift-invariant. During evaluation, they determine the representation as y = Q(g(x) − o), where o is a sub-integer offset chosen such that the mode (or, if it cannot be estimated easily, the median) of the distribution is centered on one of the quantization bins. If g is implemented with integer networks, the latter approach becomes equivalent to the former, because g then inherently computes integer outputs, and this is effectively equivalent to the quantization in. However, we've found that training with this construction leads to instabilities, such that the prior distribution never converges to a stable set of parameters. The reason may be that with quantization in e, the marginal m(y) = E x e(y | x) resembles a piecewise constant function, while the prior p must be forced to be smooth, or E x D KL [e p] would not yield any useful gradients. Because the prior is a variational approximation of the marginal, this means that the prior must be regularized (which we did not attempt here -we used the nonparametric density model described in). On the other hand, when using without quantization, the marginal is typically a smooth density, and the prior can approximate it closely without the need for regularization. 
As a remedy for the instabilities, we propose the following trick: We simply use the latter training distribution during training, but define the last layer of g without a nonlinearity and with floating point division, such that the representation is DISPLAYFORM2 during training, where u is the input to the last layer and / represents elementwise floating point division, and DISPLAYFORM3 during evaluation. This can be rewritten strictly using integer arithmetic as DISPLAYFORM4, where ⊙ represents elementwise multiplication, and the rounded product can be folded into the bias b as an optimization. This way, the representation is computed deterministically during evaluation, while during training, the marginal still resembles a smooth function, such that no regularization of the prior is necessary.
In order to assess the efficacy of integer networks to enable platform-independent compression and decompression, we re-implemented the image compression model described in, which is defined with a hyperprior. We compare the original model with a version in which the network h_s computing the prior is replaced with an integer network. We used the same network architectures in terms of number of layers, filters, etc., and the same training parameters as in the original paper. The rate-distortion performance of the model was assessed on BID15 and is shown in FIG2 (left). The modified model performs identically to the original model, as it maps out the same rate-distortion frontier. However, it is much more robust to cross-platform compression and decompression (table 1). We tested compression and decompression on four different platforms (two CPU platforms and two GPU platforms) and two different datasets, Tecnick BID2 and CLIC. The original model fails to correctly decompress more than half of the images on average when compression and decompression occurs on different platforms. The modified model brings the failure rate down to 0% in all cases.
Table 1: Decompression failure rates due to floating point round-off error on Tecnick and CLIC image datasets. When compressing and decompressing on the same CPU platform (first column), the model decompresses all images correctly. However, when compressing on a GPU or decompressing on a different platform, a large percentage of the images fail to be decoded correctly. Implementing the prior of the same model using integer networks ensures correct decompression across all tested platforms.
compressed on:    CPU 1  CPU 1  CPU 1  CPU 1  GPU 1  GPU 1  GPU 1  GPU 1
decompressed on:  CPU 1  GPU 1  CPU 2  GPU 2  CPU 1  GPU 1  CPU 2  GPU 2
Tecnick dataset (100 RGB images of 1200 × 1200 pixels):  0%  71%  54%  66%  63%  41%  59%  34%
ditto, integer prior:                                     0%   0%   0%   0%   0%   0%   0%   0%
CLIC dataset (2021 RGB images of various pixel sizes):    0%  78%  68%  78%  77%  52%  78%  54%
ditto, integer prior:                                     0%   0%   0%   0%   0%   0%   0%   0%
CPU 1: Intel Xeon E5-1650; GPU 1: NVIDIA Titan X (Pascal); CPU 2: Intel Xeon E5-2690; GPU 2: NVIDIA Titan X (Maxwell)
Figure caption: training curves for the FORMULA0 model, evaluated on BID15, corresponding to the rate point at approximately 0.7 bits per pixel in FIG2, right panel. Generally, training of integer models takes somewhat longer and is somewhat noisier than training of floating point models. When matching floating point and integer networks for asymptotic performance (128 vs. 256 filters, respectively), integer networks take longer to converge (likely due to their larger number of filters). When matching by number of filters FORMULA0, it appears that the training time to convergence is about the same, but the performance ends up worse.
It should be noted that the decreased accuracy of integer arithmetic generally leads to a lower approximation capacity than with floating point networks. We found that when implementing the models described in Ballé using integer networks throughout, the rate-distortion performance decreased (figure 3, right). The loss in approximation capacity can be compensated for by increasing the number of filters per layer. Note that this appears to increase the training time necessary for convergence (figure 4). However, note that increasing the number of parameters may not necessarily increase the size of the model parameters or the runtime, as the storage requirements for integer parameters (kernels, biases, etc.) are lower than for floating point parameters, and integer arithmetic is computationally less complex than floating point arithmetic in general. There is a large body of recent research considering quantization of ANNs mostly targeted at image recognition applications. BID6 train classification networks on lower precision multiplication. BID11 and BID19 perform quantization down to bilevel (i.e., 1-bit integers) at inference time to reduce computation in classification networks. More recently, BID24 and others have used quantization during training as well as inference, to reduce computation on gradients as well as activations, and BID5 use non-uniform quantization to remove floating point computation, replacing it completely with integer offsets into an integer lookup table. While the quantization of neural networks is not a new topic, the from the above techniques focus almost exclusively on classification networks. BID8, BID9, and others have demonstrated that these types of networks are particularly robust to capacity reduction. Models used for image compression, like many generative models, are much more sensitive to capacity constraints since they tend to underfit. As illustrated in and in figure 3 (right), this class of models is much more sensitive to reductions of capacity, both in terms of network size and the expressive power of the activation function. This may explain why our experiments with post-hoc quantization of network activations have never yielded competitive for this class of model (not shown).As illustrated in figure 1 and table 1, small floating point inconsistencies in variational latent-variable models can have disastrous effects when we use range coding to employ the models for data compression across different hardware or software platforms. The reader may wonder whether there exists other entropy coding algorithms that can convert discrete latent-variable representations into a binary representation, and which do not suffer from a sensitivity to perturbations in the probability model. Unfortunately, such an algorithm would always produce suboptimal for the following reason. The source coding theorem BID21 ) establishes a lower bound on the average length of the ing bit sequences, which range coding achieves asymptotically (i.e. for long bit sequences). The lower bound is given by the cross entropy between the marginal and the prior: DISPLAYFORM0 where |b(y)| is the length of the binary representation of y. If an entropy coding algorithm tolerates error in the values of p(y | θ), this means it must operate under the assumption of identical probability values for a range of values of θ -in other words, discretize the probability values. 
Since the cross entropy is minimal only for p(y | θ) = m(y) (for all y), this would impose a new lower bound on |b(y)| given by the cross entropy with the discretized probabilities, which is greater or equal to the cross entropy given above. Thus, the more tolerant the entropy coding method is to errors in p, the further it deviates from optimal performance. Moreover, it is hard to establish tolerance intervals for probability values computed with floating point arithmetic, in particular when ANNs are used, due to error propagation. Hence, it is generally difficult to provide guarantees that a given tolerance will not be exceeded. For similar reasons, current commercial compression methods model probabilities exclusively in the discrete domain (e.g., using lookup tables; BID16.Our approach to neural network quantization is the first we are aware of which specifically addresses non-deterministic computation, as opposed to computational complexity. It enables a variety of possible variational model architectures and distributions to be effectively used for platformindependent data compression. While we aren't assessing its effects on computational complexity here, it is conceivable that complexity reductions can also be achieved with the same approach; this is a topic for future work. | We train variational models with quantized networks for computational determinism. This enables using them for cross-platform data compression. | 1,252 | scitldr |
Neural networks powered with external memory simulate computer behaviors. These models, which use the memory to store data for a neural controller, can learn algorithms and other complex tasks. In this paper, we introduce a new memory to store weights for the controller, analogous to the stored-program memory in modern computer architectures. The proposed model, dubbed Neural Stored-program Memory, augments current memory-augmented neural networks, creating differentiable machines that can switch programs through time, adapt to variable contexts and thus fully resemble the Universal Turing Machine. A wide range of experiments demonstrate that the ing machines not only excel in classical algorithmic problems, but also have potential for compositional, continual, few-shot learning and question-answering tasks. Recurrent Neural Networks (RNNs) are Turing-complete . However, in practice RNNs struggle to learn simple procedures as they lack explicit memory . These findings have sparked a new research direction called Memory Augmented Neural Networks (MANNs) that emulate modern computer behavior by detaching memorization from computation via memory and controller network, respectively. MANNs have demonstrated significant improvements over memory-less RNNs in various sequential learning tasks a; ). Nonetheless, MANNs have barely simulated general-purpose computers. Current MANNs miss a key concept in computer design: stored-program memory. The concept has emerged from the idea of Universal Turing Machine (UTM) and further developed in Harvard Architecture , Von Neumann Architecture (von . In UTM, both data and programs that manipulate the data are stored in memory. A control unit then reads the programs from the memory and executes them with the data. This mechanism allows flexibility to perform universal computations. Unfortunately, current MANNs such as Neural Turing Machine (NTM) , Differentiable Neural Computer (DNC) and Least Recently Used Access (LRUA) only support memory for data and embed a single program into the controller network, which goes against the stored-program memory principle. Our goal is to advance a step further towards UTM by coupling a MANN with an external program memory. The program memory co-exists with the data memory in the MANN, providing more flexibility, reuseability and modularity in learning complicated tasks. The program memory stores the weights of the MANN's controller network, which are retrieved quickly via a key-value attention mechanism across timesteps yet updated slowly via backpropagation. By introducing a meta network to moderate the operations of the program memory, our model, henceforth referred to as Neural Stored-program Memory (NSM), can learn to switch the programs/weights in the controller network appropriately, adapting to different functionalities aligning with different parts of a sequential task, or different tasks in continual and few-shot learning. To validate our proposal, the NTM armed with NSM, namely Neural Universal Turing Machine (NUTM), is tested on a variety of synthetic tasks including algorithmic tasks from , composition of algorithmic tasks and continual procedure learning. For these algorithmic problems, we demonstrate clear improvements of NUTM over NTM. Further, we investigate NUTM in few-shot learning by using LRUA as the MANN and achieve notably better . Finally, we expand NUTM application to linguistic problems by equipping NUTM with DNC core and achieve competitive performances against stateof-the-arts in the bAbI task. 
Taken together, our study advances neural network simulation of Turing Machines to neural architecture for Universal Turing Machines. This develops a new class of MANNs that can store and query both the weights and data of their own controllers, thereby following the stored-program principle. A set of five diverse experiments demonstrate the computational universality of the approach. In this section, we briefly review MANN and its relations to Turing Machines. A MANN consists of a controller network and an external memory M ∈ R N ×M, which is a collection of N M -dimensional vectors. The controller network is responsible for accessing the memory, updating its state and optionally producing output at each timestep. The first two functions are executed by an interface network and a state network 1, respectively. Usually, the interface network is a Feedforward neural network whose input is c t -the output of the state network implemented as RNNs. Let W c denote the weight of the interface network, then the state update and memory control are as follows, where x t and r t−1 are data from current input and the previous memory read, respectively. The interface vector ξ t then is used to read from and write to the memory M. We use a generic notation memory (ξ t, M) to represent these memory operations that either update or retrieve read value r t from the memory. To support multiple memory accesses per step, the interface network may produce multiple interfaces, also known as control heads. Readers are referred to App. F and Graves et al. (2014; ; for details of memory read/write examples. A deterministic one-tape Turing Machine can be defined by 4-tuple (Q, Γ, δ, q 0), in which Q is finite set of states, q 0 ∈ Q is an initial state, Γ is finite set of symbol stored in the tape (the data) and δ is the transition function (the program), δ: Q × Γ → Γ × {−1, 1} × Q. At each step, the machine performs the transition function, which takes the current state and the read value from the tape as inputs and outputs actions including writing new values, moving tape head to new location (left/right) and jumping to another state. Roughly mapping to current MANNs, Q, Γ and δ map to the set of the controller states, the read values and the controller network, respectively. Further, the function δ can be factorized into two sub functions: Q × Γ → Γ × {−1, 1} and Q × Γ → Q, which correspond to the interface and state networks, respectively. By encoding a Turing Machine into the tape, one can build a UTM that simulates the encoded machine . The transition function δ u of the UTM queries the encoded Turing Machine that solves the considering task. Amongst 4 tuples, δ is the most important and hence uses most of the encoding bits. In other words, if we assume that the space of Q, Γ and q 0 are shared amongst Turing Machines, we can simulate any Turing Machine by encoding only its transition function δ. Translating to neural language, if we can store the controller network into a queriable memory and make use of it, we can build a Neural Universal Turing Machine. Using NSM is a simple way to achieve this goal, which we introduce in the subsequent section. A Neural Stored-program Memory (NSM) is a key-value memory M p ∈ R P ×(K+S), whose values are the basis weights of another neural network−the programs. P, K, and S are the number of programs, the key space dimension and the program size, respectively. This concept is a hybrid between the traditional slow-weight and fast-weight . 
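Before turning to how NSM stores and retrieves programs, the following minimal sketch (illustrative names and shapes; a plain tanh update and a content-based read stand in for RNN(·) and memory(ξ_t, M)) fixes notation for one timestep of a standard MANN, in which the interface weight W_c is the same at every step.

```python
import torch
import torch.nn.functional as F

def mann_step(x_t, r_prev, c_prev, W_state, W_c, M):
    # Eq. (1): controller state update from the current input, the previous
    # read value and the previous state (a plain tanh RNN stands in for RNN()).
    c_t = torch.tanh(torch.cat([x_t, r_prev, c_prev]) @ W_state)
    # Eq. (2): the interface weight W_c is fixed across timesteps in a standard
    # MANN; the NSM described next replaces it with a per-timestep W_c_t.
    xi_t = c_t @ W_c
    # Stand-in for memory(xi_t, M): a content-based read over the memory rows.
    weights = F.softmax(M @ xi_t, dim=0)
    r_t = weights @ M
    return r_t, c_t
```

The NSM construction below replaces the static W_c with a working weight retrieved from a program memory at every timestep.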
Like slow-weight, the keys and values in NSM are updated gradually by backpropagation. However, the values are dynamically interpolated to produce the working weight on-the-fly during the processing of a sequence, which resembles fast-weight computation. Let us denote M p (i).k and M p (i).v as the key and the program of the i-th memory slot. At timestep t, given a query key k p t, the working program is retrieved as follows, where D (·) is cosine similarity and β p t is the scalar program strength parameter. The vector working program p t is then reshaped to its matrix form and ready to be used as the weight of other neural networks. The key-value design is essential for convenient memory access as the size of the program stored in M p can be millions of dimensions and thus, direct content-based addressing as in Graves et al. (2014; ; is infeasible. More importantly, we can inject external control on the behavior of the memory by imposing constraints on the key space. For example, program collapse will happen when the keys stored in the memory stay close to each other. When this happens, p t is a balanced mixture of all programs regardless of the query key and thus having multiple programs is useless. We can avoid this phenomenon by minimizing a regularization loss defined as the following, It turns out that the combination of MANN and NSM approximates a Universal Turing Machine (Sec. 2). At each timestep, the controller in MANN reads its state and memory to generate control signal to the memory via the interface network W c, then updates its state using the state network RN N. Since the parameters of RN N and W c represent the encoding of δ, we should store both into NSM to completely encode an MANN. For simplicity, in this paper, we only use NSM to store W c, which is equivalent to the Universal Turing Machine that can simulate any one-state Turing Machine. In traditional MANN, W c is constant across timesteps and only updated slowly during training, typically through backpropagation. In our design, we compute W c t from NSM for every timestep and thus, we need a program interface network−the meta network P I −that generates an interface vector for the program memory: ξ That is, only the meta-network learns the mapping from context c t to program. When it falls into some local-minima (generating suboptimal w p t), the metanetwork struggles to escape. In our proposal, together with the meta-network, the memory keys are learnable. When the memory keys are slowly updated, the meta-network will shift its query key generation to match the new memory keys and possibly escape from the local-minima. For the case of multi-head NTM, we implement one NSM per control head and name this model Neural Universal Turing Machine (NUTM). One NSM per head is to ensure programs for one head do not interfere with other heads and thus, encourage functionality separation amongst heads. Each control head will read from (for read head) or write to (for write head) the data memory M via memory (ξ t, M) as described in. It should be noted that using multiple heads is unlike using multiple controllers per head. The former increases the number of accesses to the data memory at each timestep and employs a fixed controller to compute multiple heads, which may improve capacity yet does not enable adaptability. On the contrary, the latter varies the property of each memory access across timesteps by switching the controllers and thus potential for adaptation. 
Other MANNs such as DNC and LRUA can be armed with NSM in this manner. We also employ the regularization loss l p to prevent the programs from collapsing, ing in a final loss as follows, where Loss pred is the prediction loss and η t is annealing factor, reducing as the training step increases. The details of NUTM operations are presented in Algorithm 1. Learning to access memory is a multi-dimensional regression problem. Given the input c t, which is derived from the state h t of the controller, the aim is to generate a correct interface vector ξ t via optimizing the interface network. Instead of searching for one transformation that maps the whole space of c t to the optimal space of ξ t, NSM first partitions the space of c t into subspaces, then finds multiple transformations, each of which covers subspace of Require: a sequence x = {x t} T t=1, a data memory M and R program memories {M p,n} R n=1 corresponding to R control heads 1: Initilize h 0, r 0 2: for t = 1, T do 3: RN N can be replaced by GRU/LSTM 4: for n = 1, R do Compute the program interface ξ p t,n ← P I,n (c t) 6: Compute the data interface ξ t,n ← c t W c t,n 8: Read r t,n from memory M (if read head) or update memory M (if write head) using memory n (ξ t,n, M) c t. The program interface network P I is a meta learner that routes c t to the appropriate transformation, which then maps c t to the ξ t space. This is analogous to multilevel regression in statistics . Practical studies have shown that multilevel regression is better than ordinary regression if the input is clustered . RNNs have the capacity to learn to perform finite state computations (; Tiňo et al., 1998). The states of a RNN must be grouped into partitions representing the states of the generating automaton. As Turing Machines are finite state automata augmented with an external memory tape, we expect MANN, if learnt well, will organize its state space clustered in a way to reflect the states of the emulated Turing Machine. That is, h t as well as c t should be clustered. We realize that NSM helps NTM learn better clusterization over this space (see App. A), thereby improving NTM's performances. In this section, we investigate the performance of NUTM on algorithmic tasks introduced in doubles the length of training sequences in the Copy task. In these tasks, the model will be fed a sequence of input items and is required to infer a sequence of output items. Each item is represented by a binary vector. In the experiment, we compare two models: NTM 2 and NUTM with two programs. Although the tasks are atomic, we argue that there should be at least two memory manipulation schemes across timesteps, one for encoding the inputs to the memory and another for decoding the output from the memory. The two models are trained with cross-entropy objective function under the same setting as in. For fair comparison, the controller hidden dimension of NUTM is set smaller to make the total number of parameters of NUTM equivalent to that of NTM. The number of memory heads for both models are always equal and set to the same value as in the original paper (details in App. C). We run each experiments five times and report the mean with error bars of training losses for NTM tasks in Fig. 2 (a). Except for the Copy task, which is too simple, other tasks observe convergence speed improvement of NUTM over that of NTM, thereby validating the benefit of using two programs across timesteps even for the single task setting. 
NUTM requires fewer training samples to converge and it generalizes better to unseen sequences that are longer than training sequences. Table 1 reports the test of the best models chosen after five runs and confirms the outperformance of NUTM over NTM for generalization. To illustrate the program usage, we plot NUTM's program distributions across timesteps for Repeat Copy and Priority Sort in Fig. 3 (a) and (b), respectively. Examining the read head for Repeat Copy, we observe two program usage patterns corresponding to the encoding and decoding phases. As there is no reading in encoding, NUTM assigns the "no-read" strategy mainly to the "orange program". In decoding, the sequential reading is mostly done by the "blue program" with some contributions from the "orange program" when resetting reading head. Similar behaviors can be found in the write head for Priority Sort. While the encoding "fitting writing" (see for explanation on the strategy) is often executed by the "blue program", the decoding writing is completely taken by the "orange" program (more visualizations in App. B). In this section, we conduct an ablation study on Associative Recall (AR) to validate the benefit of proposed components that constitute NSM. We run the task with three additional baselines: NUTM using direct attention (DA), NUTM using key-value without regularization (KV), NUTM using fixed, uniform program distribution (UP) and a vanilla NTM with 2 memory heads (h = 2). The meta-network P I in DA generates the attention weight w p t directly. The KV employs key-value attention yet excludes the regularization loss presented in Eq.. The training curves over 5 runs are plotted in Fig. 2 (b). The demonstrate that DA exhibits fast yet shallow convergence. It tends to fall into local minima, which finally fails to reach zero loss. Key-value attention helps NUTM converge completely with fewer iterations. The performance is further improved with the proposed regularization loss. UP underperforms NUTM as it lacks dynamic programs. The NTM with 2 heads shows slightly better convergence compared to the NTM, yet obviously underperforms NUTM (p = 2) with 1 head and fewer parameters. This validates our argument on the difference between using multiple heads and multiple programs (Sec. 3.2). In neuroscience, sequencing tasks test the ability to remember a series of tasks and switch tasks alternatively . A dysfunctional brain may have difficulty in changing from one task to the next and get stuck in its preferred task (perseveration phenomenon). To analyze this problem in NTM, we propose a new set of experiments in which a task is generated by sequencing a list of subtasks. The set of subtasks is chosen from the NTM single tasks (excluding Dynamic N-grams for format discrepancy) and the order of subtasks in the sequence is dictated by an indicator vector put at the beginning of the sequence. Amongst possible combinations of subtasks, we choose {Copy, Repeat Copy}(C+RC), {Copy, Associative Recall} (C+AR), {Copy, Priority Sort} (C+PS) and all (C+RC+AC+PS) 3. The learner observes the order indicator followed by a sequence of subtasks' input items and is requested to consecutively produce the output items of each subtasks. As shown in Fig. 4, some tasks such as Copy and Associative Recall, which are easy to solve if trained separately, become unsolvable by NTM when sequenced together. One reason is NTM fails to change the memory access behavior (perseveration). 
For example, NTM keeps following the Repeat Copy reading strategy for all timesteps in the C+RC task (Fig. 3 (d)). Meanwhile, NUTM can learn to change the program distribution when a new subtask appears in the sequence and thus ensures a different accessing strategy per subtask (Fig. 3 (c)). In continual learning, catastrophic forgetting happens when a neural network quickly forgets previously acquired skills upon learning new skills. In this section, we demonstrate the versatility of NSM by showing that a naive application of NSM without much modification can help NTM to mitigate catastrophic forgetting. We design an experiment similar to the Split MNIST setup to investigate whether NSM can improve NTM's performance. In our experiment, we let the models see the training data from the four tasks sequentially, one task at a time. While training on one task and freezing the others, we force "hard" attention over the programs by replacing the softmax function in Eq. 5 with the Gumbel-softmax. Also, to rule out catastrophic forgetting in the state network, we use feedforward controllers in the two baselines. After finishing one task, we evaluate the bit accuracy, measured by 1 − (bit error per sequence / total bits per sequence), over all 4 tasks. As shown in Fig. 5, NUTM outperforms NTM by a moderate margin (10-40% per task). Although NUTM also experiences catastrophic forgetting, it preserves some memories of previous tasks. In particular, NUTM keeps performing perfectly on Copy even after it learns Repeat Copy. For other, dissimilar task transitions, the performance drops significantly, which suggests that more effort is required to bring NSM to continual learning. Few-shot learning or meta learning tests the ability to rapidly adapt within a task while gradually capturing the way the task structure varies. By storing sample-class bindings, MANNs are capable of classifying new data after seeing only a few samples. As NSM gives flexible memory controls, it makes a MANN more adaptive to changes and thus able to perform better in this setting. To verify that, we apply NSM to the LRUA memory and follow the experiments introduced in, using the Omniglot dataset to measure few-shot classification accuracy. The dataset includes images of 1623 characters, with 20 examples of each character. During training, a sequence (episode) of images is randomly selected from C classes of characters in the training set (1200 characters), where C = 5 or 10, corresponding to sequence lengths of 50 and 75, respectively. Each class is assigned a random label which shuffles between episodes and is revealed to the models after each prediction. After 100,000 episodes of training, the models are tested with unseen images from the testing set (423 characters). The two baselines are MANN and NUTM (both use the LRUA core). For NUTM, we only tune p and pick the best values: p = 2 and p = 3 for 5 classes and 10 classes, respectively.
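Returning to the continual-learning setup above: the hard program selection is obtained by swapping the softmax in the program attention for a Gumbel-softmax. A minimal sketch of that substitution follows; the temperature and the straight-through discretization are choices assumed here, and PyTorch also ships an equivalent built-in, torch.nn.functional.gumbel_softmax.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_weights(scores, tau=1.0, hard=True):
    # Gumbel-softmax over program scores: add Gumbel(0, 1) noise, apply a
    # temperature-controlled softmax, and optionally discretize to a one-hot
    # choice while keeping gradients through the soft sample (straight-through).
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + 1e-20) + 1e-20)
    y_soft = F.softmax((scores + gumbel) / tau, dim=-1)
    if not hard:
        return y_soft
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # Forward pass uses the one-hot sample; gradients flow through y_soft.
    return (y_hard - y_soft).detach() + y_soft
```

In the program-retrieval sketch earlier, w_p = gumbel_softmax_weights(scores) would take the place of the softmax over program scores, so that exactly one program is active per step.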
We apply NUTM to the question answering domain by replacing the NTM core with DNC. Compared to NTM's sequential addressing, dynamic memory addressing in DNC is more powerful and thus suitable for NSM integration to solve non-algorithmic problems such as question answering. Following previous works of DNC, we use bAbI dataset to measure the performance of the NUTM with DNC core (three variants p = 1, p = 2 and p = 4). In the dataset, each story is followed by a series of questions and the network reads all word by word, then predicts the answers. Although synthetically generated, bAbI is a good benchmark that tests 20 aspects of natural language reasoning including complex skills such as induction and counting, We found that increasing number of programs helps NUTM improve performance. In particular, NUTM with 4 programs, after 50 epochs jointly trained on all 20 question types, can achieve a mean test error rate of 3.3% and manages to solve 19/20 tasks (a task is considered solved if its error <5%). The mean and s.d. across 10 runs are also compared with other reported by recent works (see Table 3). Excluding baselines under different setups, our is the best reported mean on bAbI that we are aware of. More details are described in App. E. Previous investigations into MANNs mostly revolve around memory access mechanisms. The works in Graves et al. (2014; introduce content-based, location-based and dynamic memory reading/writing. scales to bigger memory by sparse access; optimizes memory operations with uniform writing; and MANNs with extra memory have been proposed (b). However, these works keep using memory for storing data rather than the weights of the network and thus parallel to our approach. Other DNC modifications are also orthogonal to our work. Another line of related work involves modularization of neural networks, which is designed for visual question answering. In module networks (b; a), the modules are manually aligned with predefined concepts and the order of execution is decided by the question. Although the module in these works resembles the program in NSM, our model is more generic and flexible with soft-attention over programs and thus fully differentiable. Further, the motivation of NSM does not limit to a specific application. Rather, NSM aims to help MANN reach general-purpose computability. If we view NSM network as a dynamic weight generator, the program in NSM can be linked to fast weight (von der ; ; b). These papers share the idea of using different weights across timesteps to enable dynamic adaptation. Using outer-product is a common way to implement fast-weight (a;). These fast weights are directly generated and thus different from our programs, which are interpolated from a set of slow weights. Tensor/Multiplicative RNN and Hypernetwork are also relevant related works. These methods attempt to make the working weight of RNNs dependent on the input to enable quick adaption through time. Nevertheless, they do not support modularity. In particular, Hypernetwork generates scaling factors for the single weight of the main RNN. It does not aim to use multiple slow-weights (programs) and thus, different from our approach. Tensor RNN is closer to our idea when the authors propose to store M slow-weights, where M is the number of input dimension, which is acknowledged impractical. Unlike our approach, they do not use a meta-network to generate convex combinations amongst weights. 
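To make that contrast concrete, here is a minimal sketch of the NSM-style working weight: at each timestep the controller's weight is a convex combination of the stored programs, with coefficients supplied by the meta-network. The shapes and the tanh recurrence are illustrative assumptions, not the exact controller used in the paper:

```python
import numpy as np

def controller_step(x, h, programs, w_p):
    """One recurrent step whose weight is a convex combination of p stored programs.

    programs: (p, hidden, hidden + input_dim)  -- slow weights held in the program memory
    w_p:      (p,)                             -- program distribution from the meta-network
    """
    W_t = np.tensordot(w_p, programs, axes=1)          # working weight for this timestep
    return np.tanh(W_t @ np.concatenate([h, x]))

rng = np.random.default_rng(1)
programs = rng.standard_normal((3, 5, 5 + 4))          # 3 programs for a 4-input, 5-unit toy cell
h, x = np.zeros(5), rng.standard_normal(4)
w_p = np.array([0.7, 0.2, 0.1])
print(controller_step(x, h, programs, w_p).shape)      # (5,)
```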
Instead, they propose Multiplicative RNN that factorizes the working weight to product of three matrices, which looses modularity. On the contrary, we explicitly model the working weight as an interpolation of multiple programs and use a meta-network to generate the coefficients. This design facilitates modularity because each program is trained towards some functionality and can be switched or combined with each other to perform the current task. Last but not least, while the related works focus on improving RNN with fast-weight, we aim to reach a neural simulation of Universal Turing Machine, in which fast-weight is a way to implement stored-program principle. This paper introduces the Neural Stored-program Memory (NSM), a new type of external memory for neural networks. The memory, which takes inspirations from the stored-program memory in computer architecture, gives memory-augmented neural networks (MANNs) flexibility to change their control programs through time while maintaining differentiability. The mechanism simulates modern computer behavior, potential making MANNs truly neural computers. Our experiments demonstrated that when coupled with our model, the Neural Turing Machine learns algorithms better and adapts faster to new tasks at both sequence and sample levels. When used in few-shot learning, our method helps MANN as well. We also applied the NSM to the Differentiable Neural Computer and observed a significant improvement, reaching the state-of-the-arts in the bAbI task. Although this paper limits to MANN integration, other neural networks can also reap benefits from our proposed model, which will be explored in future works. Table 9: Task settings (continual procedure learning tasks). We use similar hyper-parameters as in , which are reported in Tab Table 11: Test-set classification accuracy (%) on the Omniglot dataset after 100,000 episodes of training. * denotes available from (some are estimated from plotted figures). We train the models using RMSprop optimizer with fixed learning rate of 10 −4 and momentum of 0.9. The batch size is 32 and we adopt layer normalization to DNC's layers. practice, we also remove temporal linkage for faster training. The details of hyper-parameters are listed in Table 12. Full NUTM (p = 4) are reported in 3.3 5.6 ± 1.9 Failed (Err. >5%) 1 3 ± 1.2 Table 13: NUTM (p = 4) bAbI best and mean errors (%). 9 When p = 1, the model converges to layer-normed DNC For all tasks, η t is fixed to 0.1, reducing with decay rate of 0.9. Ablation study's learning losses with mean and error bar are plotted in Fig. 23. | A neural simulation of Universal Turing Machine | 1,253 | scitldr |
It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate $\epsilon$ and scaling the batch size $B \propto \epsilon$. Finally, one can increase the momentum coefficient $m$ and scale $B \propto 1/(1-m)$, although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to 76.1% validation accuracy in under 30 minutes. Stochastic gradient descent (SGD) remains the dominant optimization algorithm of deep learning. However while SGD finds minima that generalize well BID30 BID26, each parameter update only takes a small step towards the objective. Increasing interest has focused on large batch training BID8 BID10 BID27, in an attempt to increase the step size and reduce the number of parameter updates required to train a model. Large batches can be parallelized across many machines, reducing training time. Unfortunately, when we increase the batch size the test set accuracy often falls BID12 BID8.To understand this surprising observation, BID23 argued one should interpret SGD as integrating a stochastic differential equation. They showed that the scale of random fluctuations in the SGD dynamics, g = (N B − 1), where is the learning rate, N training set size and B batch size. Furthermore, they found that there is an optimum fluctuation scale g which maximizes the test set accuracy (at constant learning rate), and this introduces an optimal batch size proportional to the learning rate when B N. BID8 already observed this scaling rule empirically and exploited it to train ResNet-50 to 76.3% ImageNet validation accuracy in one hour. Here we show,• When one decays the learning rate, one simultaneously decays the scale of random fluctuations g in the SGD dynamics. Decaying the learning rate is simulated annealing. We propose an alternative procedure; instead of decaying the learning rate, we increase the batch size during training. This strategy achieves near-identical model performance on the test set with the same number of training epochs but significantly fewer parameter updates. Our proposal does not require any fine-tuning as we follow pre-existing training schedules; when the learning rate drops by a factor of α, we instead increase the batch size by α.• As shown previously, we can further reduce the number of parameter updates by increasing the learning rate and scaling B ∝. One can also increase the momentum coefficient and scale B ∝ 1/(1 − m), although this slightly reduces the test accuracy. We train InceptionResNet-V2 on ImageNet in under 2500 parameter updates, using batches of 65536 images, and reach a validation set accuracy of 77%. We also replicate the setup of BID8 on TPU and train ResNet-50 on ImageNet to 76.1% accuracy in under 30 minutes. 
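The claimed equivalence rests on the noise scale g = ε(N/B − 1) ≈ εN/B: decaying the learning rate by some factor changes g in (almost exactly) the same way as increasing the batch size by that factor. A small sketch, with illustrative numbers:

```python
def sgd_noise_scale(lr, train_size, batch_size):
    """g = lr * (train_size / batch_size - 1); for batch_size << train_size, g ~= lr * train_size / batch_size."""
    return lr * (train_size / batch_size - 1)

N = 50_000                                   # e.g. the CIFAR-10 training set
# decaying the learning rate by 5x at a fixed batch size ...
print(sgd_noise_scale(0.1, N, 128), sgd_noise_scale(0.02, N, 128))
# ... reduces g by (almost exactly) the same factor as a 5x batch-size increase at fixed lr
print(sgd_noise_scale(0.02, N, 128), sgd_noise_scale(0.1, N, 640))
```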
We note that a number of recent works have discussed increasing the batch size during training BID7 BID3 BID1 BID2 BID5, but to our knowledge no paper has shown empirically that increasing the batch size and decaying the learning rate are quantitatively equivalent. A key contribution of our work is to demonstrate that decaying learning rate schedules can be directly converted into increasing batch size schedules, and vice versa; providing a straightforward pathway towards large batch training. In section 2 we discuss the convergence criteria for SGD in strongly convex minima, in section 3 we interpret decaying learning rates as simulated annealing, and in section 4 we discuss the difficulties of training with large momentum coefficients. Finally in section 5 we present conclusive experimental evidence that the empirical benefits of decaying learning rates in deep learning can be obtained by instead increasing the batch size during training. We exploit this observation and other tricks to achieve efficient large batch training on CIFAR-10 and ImageNet. SGD is a computationally-efficient alternative to full-batch training, but it introduces noise into the gradient, which can obstruct optimization. It is often stated that to reach the minimum of a strongly convex function we should decay the learning rate, such that BID22: DISPLAYFORM0 DISPLAYFORM1 i denotes the learning rate at the i th gradient update. Intuitively, equation 1 ensures we can reach the minimum, no matter how far away our parameters are initialized, while equation 2 ensures that the learning rate decays sufficiently quickly that we converge to the minimum, rather than bouncing around it due to gradient noise BID25. However, although these equations appear to imply that the learning rate must decay during training, equation 2 holds only if the batch size is constant.1 To consider how to proceed when the batch size can vary, we follow recent work by BID23 and interpret SGD as integrating the stochastic differential equation below, DISPLAYFORM2 C represents the cost function (summed over all training examples), and ω represents the parameters, which evolve in continuous "time" t towards their final values. Meanwhile η(t) represents Gaussian random noise, which models the consequences of estimating the gradient on a mini-batch. They showed that the mean η(t) = 0 and variance η(t)η(t) = gF (ω)δ(t − t), where F (ω) describes the covariances in gradient fluctuations between different parameters. They also proved that the "noise scale" g = (DISPLAYFORM3, where is the learning rate, N the training set size and B the batch size. This noise scale controls the magnitude of the random fluctuations in the training dynamics. Usually B N, and so we may approximate g ≈ N/B. When we decay the learning rate, the noise scale falls, enabling us to converge to the minimum of the cost function (this is the origin of equation 2 above). However we can achieve the same reduction in noise scale at constant learning rate by increasing the batch size. The main contribution of this work is to show that it is possible to make efficient use of vast training batches, if one increases the batch size during training at constant learning rate until B ∼ N/10. After this point, we revert to the use of decaying learning rates. To the surprise of many researchers, it is now increasingly accepted that small batch training often generalizes better to the test set than large batch training. This "generalization gap" was explored extensively by BID12. 
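As an aside, the recipe above can be written as a direct conversion of an existing schedule: hold ε fixed and multiply B by the decay factor at each milestone until B reaches roughly N/10, then fall back to decaying ε. A sketch; the milestone epochs are illustrative, while the decay factor of 5 and the cap of 5120 follow the CIFAR-10 setup described later:

```python
def convert_schedule(milestones, base_lr, base_batch, decay=5, max_batch=5120):
    """Turn a step learning-rate decay schedule into an increasing-batch-size schedule.

    At each milestone the noise scale drops by `decay`: the batch size is multiplied by
    `decay` while it stays below `max_batch` (~N/10); afterwards the learning rate decays.
    Returns a list of (start_epoch, lr, batch_size) phases.
    """
    lr, bs = base_lr, base_batch
    phases = [(0, lr, bs)]
    for epoch in milestones:
        if bs * decay <= max_batch:
            bs *= decay
        else:
            lr /= decay
        phases.append((epoch, lr, bs))
    return phases

# illustrative CIFAR-10 style schedule: three noise-scale drops by a factor of 5
for phase in convert_schedule([60, 120, 160], base_lr=0.1, base_batch=128):
    print(phase)
```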
BID23 observed an optimal batch size B opt which maximized the test set accuracy at constant learning rate. They argued that this optimal batch size arises when the noise scale g ≈ N/B is also optimal, and supported this claim by demonstrating empirically that B opt ∝ N. Earlier, BID8 exploited a linear scaling rule between batch size and learning rate to train ResNet-50 on ImageNet in one hour with batches of 8192 images. These indicate that gradient noise can be beneficial, especially in non-convex optimization. It has been proposed that noise helps SGD escape "sharp minima" which generalize poorly BID9 BID4 BID12 BID23. Given these , it is unclear to the present authors whether equations 1 and 2 are relevant in deep learning. Supporting this view, we note that most researchers employ early stopping BID20, whereby we intentionally prevent the network from reaching a minimum. Nonetheless, decaying learning rates are empirically successful. To understand this, we note that introducing random fluctuations whose scale falls during training is also a well established technique in non-convex optimization; simulated annealing. The initial noisy optimization phase allows us to explore a larger fraction of the parameter space without becoming trapped in local minima. Once we have located a promising region of parameter space, we reduce the noise to fine-tune the parameters. Finally, we note that this interpretation may explain why conventional learning rate decay schedules like square roots or exponential decay have become less popular in deep learning in recent years. Increasingly, researchers favor sharper decay schedules like cosine decay BID16 or step-function drops BID29. To interpret this shift, we note that it is well known in the physical sciences that slowly annealing the temperature (noise scale) helps the system to converge to the global minimum, which may be sharp. Meanwhile annealing the temperature in a series of discrete steps can trap the system in a "robust" minimum whose cost may be higher but whose curvature is lower. We suspect a similar intuition may hold in deep learning. Many researchers no longer use vanilla SGD, instead preferring SGD with momentum. extended their analysis of SGD to include momentum, and found that the "noise scale", DISPLAYFORM0 This reduces to the noise scale of vanilla SGD when the momentum coefficient m → 0. Intuitively, ef f = /(1 − m) is the effective learning rate. They proposed to reduce the number of parameter updates required to train a model by increasing the learning rate and momentum coefficient, while simultaneously scaling B ∝ /(1 − m). We find that increasing the learning rate and scaling B ∝ performs well. However increasing the momentum coefficient while scaling B ∝ 1/(1−m) slightly reduces the test accuracy. To analyze this observation, consider the momentum update equations, DISPLAYFORM1 DISPLAYFORM2 A is the "accumulation", while dĈ dω is the mean gradient per training example, estimated on a batch of size B. In Appendix A we analyze the growth of the accumulation at the start of training. This variable tracks the exponentially decaying average of gradient estimates, but initially it is initialized to zero. We find that the accumulation grows in exponentially towards its steady state value over a "timescale" of approximately B/(N (1 − m)) training epochs. During this time, the magnitude of the parameter updates ∆ω is suppressed, reducing the rate of convergence. 
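The timescale above is easy to quantify. Assuming the standard heavy-ball form A ← mA + gradient (the paper's exact parameterization may differ by a constant factor), the accumulation reaches a fraction 1 − m^t of its steady state after t updates, and growing in takes roughly B/(N(1 − m)) epochs:

```python
def lost_epochs(batch_size, train_size, momentum):
    """Rough number of epochs for the accumulation to grow in: B / (N * (1 - m))."""
    return batch_size / (train_size * (1.0 - momentum))

def accumulation_fraction(num_updates, momentum):
    """Fraction of the steady-state accumulation reached after t updates, assuming a
    constant gradient and the heavy-ball form A <- m*A + grad: 1 - m**t."""
    return 1.0 - momentum ** num_updates

print(lost_epochs(128, 50_000, 0.9))      # ~0.026 epochs for a small-batch baseline
print(lost_epochs(3200, 50_000, 0.98))    # ~3.2 epochs with large batch and high momentum
print(accumulation_fraction(5, 0.98))     # only ~10% grown in after 5 updates
```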
Consequently when training at high momentum one must introduce additional epochs to allow the dynamics to catch up. Furthermore, when we increase the momentum coefficient we increase the timescale required for the accumulation to forget old gradients (this timescale is also ∼ B/(N (1 − m))). Once this timescale becomes several epochs long, the accumulation cannot adapt to changes in the loss landscape, impeding training. This is likely to be particularly problematic at points where the noise scale decays. BID13 proposed initialization bias correction, whereby the learning rate is increased at early times to compensate the suppressed initial value of the accumulation. However when the batch size is large, we found that this often causes instabilities during the early stages of training. We note that BID8 recommended a reduced learning rate for the first few epochs. In section 5.1, we demonstrate that decreasing the learning rate and increasing the batch size during training are equivalent. In section 5.2, we show we can further reduce the number of parameter updates by increasing the effective learning rate and scaling the batch size. In section 5.3 we apply our insights to train Inception-ResNet-V2 on ImageNet, using vast batches of up to 65536 images. Finally in section 5.4, we train ResNet-50 to 76.1% ImageNet validation accuracy within 30 minutes. Our first experiments are performed on CIFAR-10, using a "16-4" wide ResNet architecture, following the implementation of BID29. We use ghost batch norm BID10, with a ghost batch size of 128. This ensures the mean gradient is independent of batch size, as required by the analysis of BID23. To demonstrate the equivalence between decreasing the learning rate and increasing the batch size, we consider three different training schedules, as shown in figure 1. "Decaying learning rate" follows the original implementation; the batch size is constant, while the learning rate repeatedly decays by a factor of 5 at a sequence of "steps". "Hybrid" holds the learning rate constant at the first step, instead increasing the batch size by a factor of 5. However after this first step, the batch size is constant and the learning rate decays by a factor of 5 at each subsequent step. This schedule mimics how one might proceed if hardware imposes a limit on the maximum achievable batch size. In "Increasing batch size", we hold the learning rate constant throughout training, and increase the batch size by a factor of 5 at every step. If the learning rate itself must decay during training, then these schedules should show different learning curves (as a function of the number of training epochs) and reach different final test set accuracies. Meanwhile if it is the noise scale which should decay, all three schedules should be indistinguishable. We plot the evolution of the training set cross entropy in figure 2a, where we train using SGD with momentum and a momentum parameter of 0.9. The three training curves are almost identical, despite showing marked drops as we pass through the first two steps (where the noise scale is reduced). These suggest that it is the noise scale which is relevant, not the learning rate. To emphasize the potential benefits of increasing the batch size, we replot the training cross-entropy in figure 2b, but as a function of the number of parameter updates rather than the number of epochs. 
While all three schedules match up to the first "step", after this point increasing the batch size dramatically reduces the number of parameter updates required to train the model. Finally, to confirm that our alternative learning schedules generalize equally well to the test set, in figure 3a we exhibit the test set accuracy, as a function of the number of epochs (so each curve can be directly compared). Once again, the three schedules are almost identical. We conclude that we can achieve all of the benefits of decaying the learning rate in these experiments by instead increasing the batch size. We present additional to establish that our proposal holds for a range of optimizers, all using the schedules presented in FIG0. In figure 3b, we present the test set accuracy, when training with Nesterov momentum BID19 and momentum parameter 0.9, observing three nearidentical curves. In FIG2, we repeat the same experiment with vanilla SGD, again obtaining three highly similar curves (In this case, there is no clear benefit of decaying the learning rate after the first step). Finally in FIG2 we repeat the experiment with Adam BID13. We Test accuracy as a function of the number of parameter updates. " Increasing batch size" replaces learning rate decay by batch size increases. "Increased initial learning rate" additionally increases the initial learning rate from 0.1 to 0.5. Finally "Increased momentum coefficient" also increases the momentum coefficient from 0.9 to 0.98.use the default parameter settings of TensorFlow, such that the initial base learning rate here was 10 −3, β 1 = 0.9 and β 2 = 0.999. Thus the learning rate schedule is obtained by dividing figure 1a by 10 −2. Remarkably, even here the three curves closely track each other. We now focus on our secondary objective; minimizing the number of parameter updates required to train a model. As shown above, the first step is to replace decaying learning rates by increasing batch sizes. We show here that we can also increase the effective learning rate ef f = /(1 − m) at the start of training, while scaling the initial batch size B ∝ ef f. All experiments are conducted using SGD with momentum. There are 50000 images in the CIFAR-10 training set, and since the scaling rules only hold when B N, we decided to set a maximum batch size B max = 5120.We consider four training schedules, all of which decay the noise scale by a factor of five in a series of three steps. " Original training schedule" follows the implementation of BID29, using an initial learning rate of 0.1 which decays by a factor of 5 at each step, a momentum coefficient of 0.9, and a batch size of 128. "Increasing batch size" also uses a learning rate of 0.1, initial batch size of 128 and momentum coefficient of 0.9, but the batch size increases by a factor of 5 at each step. These schedules are identical to "Decaying learning rate" and "Increasing batch size" in section 5.1 above. "Increased initial learning rate" also uses increasing batch sizes during training, but additionally uses an initial learning rate of 0.5 and an initial batch size of 640. Finally "Increased momentum coefficient" combines increasing batch sizes during training and the increased initial learning rate of 0.5, with an increased momentum coefficient of 0.98, and an initial batch size of 3200. Note that we only increase the batch size until it reaches B max, after this point we achieve subsequent decays in noise scale by decreasing the learning rate. 
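The last two schedules follow directly from the scaling rule B ∝ ε/(1 − m) at fixed noise scale; a small helper (function names are ours) makes the arithmetic explicit. Note that the final configuration has an effective learning rate of ε/(1 − m) = 25, which is why the accumulation timescale discussed earlier becomes a concern:

```python
def effective_lr(lr, momentum):
    """eps_eff = eps / (1 - m)."""
    return lr / (1.0 - momentum)

def scaled_batch(base_lr, base_m, base_batch, new_lr, new_m):
    """Batch size keeping the noise scale fixed: B proportional to eps / (1 - m)."""
    return int(round(base_batch * (new_lr / base_lr) * (1.0 - base_m) / (1.0 - new_m)))

base = dict(base_lr=0.1, base_m=0.9, base_batch=128)
print(scaled_batch(**base, new_lr=0.5, new_m=0.9))    # 640  -> "Increased initial learning rate"
print(scaled_batch(**base, new_lr=0.5, new_m=0.98))   # 3200 -> "Increased momentum coefficient"
print(effective_lr(0.5, 0.98))                        # eps_eff = 25
```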
We emphasize that, as in the previous section, all four schedules require the same number of training epochs. We plot the evolution of the test set accuracy in FIG3, as a function of the number of parameter updates. Our implementation of the original training schedule requires ∼80000 updates, and reaches a final test accuracy of 94.3% (the original paper reports 95% accuracy, which we have not been able to replicate). " Increasing batch size" requires ∼29000 updates, reaching a final accuracy of 94.4%. "Increased initial learning rate" requires under 6500 updates, reaching a final accuracy of 94.5%. Finally, "Increased momentum coefficient" requires less than 2500 parameter updates, but reaches a lower test accuracy of 93.3%. Across five additional training runs for each schedule, the median accuracies were 94.3%, 94.2%, 94.2% and 93.5% respectively. We discussed a potential explanation for the performance drop when training with large momentum coefficients in section 4. We provide additional in appendix B, varying the initial learning rate between 0.1 and 3.2 while holding the batch size constant. We find that the test accuracy falls for initial learning rates larger than ∼0.4.(a) (b) Figure 6: Inception-ResNet-V2 on ImageNet. Increasing the batch size during training achieves similar to decaying the learning rate, but it reduces the number of parameter updates from just over 14000 to below 6000. We run each experiment twice to illustrate the variance. We now apply our insights to reduce the number of parameter updates required to train ImageNet. BID8 trained a ResNet-50 on ImageNet in one hour, reaching 76.3% validation accuracy. To achieve this, they used batches of 8192, with an initial learning rate of 3.2 and a momentum coefficient of 0.9. They completed 90 training epochs, decaying the learning rate by a factor of ten at the 30th, 60th and 80th epoch. ImageNet contains around 1.28 million images, so this corresponds to ∼14000 parameter updates. They also introduced a warm-up phase at the start of training, in which the learning rate and batch size was gradually increased. We also train for 90 epochs and follow the same schedule, decaying the noise scale by a factor of ten at the 30th, 60th and 80th epoch. However we did not include a warm-up phase. To set a stronger baseline, we replaced ResNet-50 by Inception-ResNet-V2 BID24. Initially we used a ghost batch size of 32. In figure 6, we train with a learning rate of 3.0 and a momentum coefficient of 0.9. The initial batch size was 8192. For "Decaying learning rate", we hold the batch size fixed and decay the learning rate, while in "Increasing batch size" we increase the batch size to 81920 at the first step, but decay the learning rate at the following two steps. We repeat each schedule twice, and find that all four runs exhibit a very similar evolution of the test set accuracy during training. The final accuracies of the two "Decaying learning rate" runs are 78.7% and 77.8%, while the final accuracy of the two "Increasing batch size" runs are 78.1% and 76.8%. Although there is a slight drop, the difference in final test accuracies is similar to the variance between training runs. Increasing the batch size reduces the number of parameter updates during training from just over 14000 to below 6000. Note that the training curves appear unusually noisy because we reduced the number of test set evaluations to reduce the model training time. BID8 already increased the learning rate close to its maximum stable value. 
To further reduce the number of parameter updates we must increase the momentum coefficient. We introduce a maximum batch size, B max = 2 16 = 65536. This ensures B N, and it also improved the stability of our distributed training. We also increased the ghost batch size to 64, matching the batch size of our GPUs and reducing the training time. We compare three different schedules, all of which have the same base schedule, decaying the noise scale by a factor of ten at the 30th, 60th and 80th epoch. We use an initial learning rate of 3 throughout. " Momentum 0.9" uses an initial batch size of 8192, "Momentum 0.975" uses an initial batch size of 16384, and "Momentum 0.9875" uses an initial batch size of 32768. For all schedules, we decay the noise scale by increasing the batch size until reaching B max, and then decay the learning rate. We plot the test set accuracy in figure 7. "Momentum 0.9" achieves a final accuracy of 78.8% in just under 6000 updates. We performed two runs of "Momentum 0.95", achieving final accuracies of 78.1% and 77.8% in under 3500 updates. Finally "Momentum 0.975" achieves final accuracies of 77.5% and 76.8% in under 2500 updates. Figure 7: Inception-ResNet-V2 on ImageNet. Increasing the momentum parameter reduces the number of parameter updates required, but it also leads to a small drop in final test accuracy. To confirm that increasing the batch size during training can reduce model training times, we replicated the set-up described by BID8 on a half TPU pod, comprising 256 tensorcores BID11. Using tensorFlow, we first train ResNet-50 for 90 epochs to 76.1% validation set accuracy in under 45 minutes, utilising batches of 8192 images. To utilise the full TPU pod, we then increase the batch size after the first 30 epochs to 16384 images, and achieve the same validation accuracy of 76.1% in under 30 minutes. The last 60 epochs and the first 30 epochs both take just under 15 minutes, demonstrating near-perfect scaling efficiency across the pod, such that the number of parameter updates provides a meaningful measure of the training time. To our knowledge, this is the first procedure which has reduced the training time of BID8 without sacrificing final validation accuracy BID28 BID0. By contrast, doubling the initial learning rate and using batches of 16384 images throughout training achieves a lower validation set accuracy of 75.0% in 22 minutes, demonstrating that increasing the batch size during training is crucial to the performance gains above. These show that the ideas presented in this paper will become increasingly important as new hardware for large-batch training becomes available. This paper extends the analysis of SGD in BID23 to include decaying learning rates. BID17 also interpreted SGD as a stochastic differential equation, in order to discuss how SGD could be modified to perform approximate Bayesian posterior sampling. However they state that their analysis holds only in the neighborhood of a minimum, while BID12 showed that the beneficial effects of noise are most pronounced at the start of training. BID15 proposed the use of control theory to set the learning rate and momentum coefficient. BID8 observed a linear scaling rule between batch size and learning rate, B ∝, and used this rule to reduce the time required to train ResNet-50 on ImageNet to one hour. To our knowledge, this scaling rule was fist adopted by BID14. BID2 (section 4.2) demonstrated that SGD converges to strongly convex minima in similar numbers of training epochs if B ∝. 
BID10 proposed an alternative scaling rule, B ∝ √. BID27 proposed Layer-wise Adaptive Rate Scaling (LARS), which applies different learning rates to different parameters in the network, and used it to train ImageNet in 14 minutes BID28, albeit to a lower final accuracy of 74.9%. K-FAC BID18 is also gaining popularity as an efficient alternative to SGD. BID26 argued that adaptive optimization methods tend to generalize less well than SGD and SGD with momentum (although they did not include K-FAC in their study), while our work reduces the gap in convergence speed. Asynchronous-SGD is another popular strategy, which enables the use of multiple GPUs even when batch sizes are small BID21 BID6. We do not consider asynchronous-SGD in this work, since the scaling rules enabled us to use batch sizes on the order of the training set size. We can often achieve the benefits of decaying the learning rate by instead increasing the batch size during training. We support this claim with experiments on CIFAR-10 and ImageNet, and with a range of optimizers including SGD, Momentum and Adam. Our findings enable the efficient use of vast batch sizes, significantly reducing the number of parameter updates required to train a model. This has the potential to dramatically reduce model training times. We further increase the batch size B by increasing the learning rate and momentum parameter m, while scaling B ∝ /(1 − m). Combining these strategies, we train Inception-ResNet-V2 on ImageNet to 77% validation accuracy in under 2500 parameter updates, using batches of 65536 images. We also exploit increasing batch sizes to train ResNet-50 to 76.1% ImageNet validation set accuracy on TPU in under 30 minutes. Most strikingly, we achieve this without any hyper-parameter tuning, since our scaling rules enable us to directly convert existing hyper-parameter choices from the literature for large batch training. The update equations for SGD with momentum, DISPLAYFORM0 DISPLAYFORM1 A is the "accumulation" variable, while dĈ dω is the mean gradient per training example, estimated on a batch of size B. We initialize the accumulation to zero, and it takes a number of updates for the magnitude of the accumulation to "grow in". During this time, the size of the parameter updates ∆ω is suppressed, reducing the effective learning rate. We can model the growth of the accumulation by assuming that the gradient at the start of training is approximately constant, such that N epochs denotes the number of training epochs performed. The accumulation variable grows in exponentially, and consequently we can estimate the effective number of "lost" training epochs, DISPLAYFORM2 Since the batch size B ∝ /(1 − m), we find N lost ∝ /(N (1 − m) 2 ). We must either introduce additional training epochs to compensate, or ensure that the number of lost training epochs is negligible, when compared to the total number of training epochs performed before the decaying the noise scale. Note that N lost rises most rapidly when one increases the momentum coefficient. We exhibit the test accuracy of our "16-4" wide ResNet implementation on CIFAR10 in FIG7, as a function of the initial learning rate. For learning rate = 0.1, the batch size B = 128 is constant throughout training. This matches the "Original training schedule" of section 5.2 of the main text. When we increase the learning rate we scale B ∝ and perform the same number of training epochs. | Decaying the learning rate and increasing the batch size during training are equivalent. 
| 1,254 | scitldr |
Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning, using rewards derived from word overlap to solve the misalignment between evaluation metric and optimization objective. In addition to the mixed objective, we introduce a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that requires the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state of the art with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1. Existing state-of-the-art question answering models are trained to produce exact answer spans for a question and a document. In this setting, a ground truth answer used to supervise the model is defined as a start and an end position within the document. Existing training approaches optimize using cross entropy loss over the two positions. However, this suffers from a fundamental disconnect between the optimization, which is tied to the position of a particular ground truth answer span, and the evaluation, which is based on the textual content of the answer. This disconnect is especially harmful in cases where answers that are textually similar to, but distinct in positions from, the ground truth are penalized in the same fashion as answers that are textually dissimilar. For example, suppose we are given the sentence "Some believe that the Golden State Warriors team of 2017 is one of the greatest teams in NBA history", the question "which team is considered to be one of the greatest teams in NBA history", and a ground truth answer of "the Golden State Warriors team of 2017". The span "Warriors" is also a correct answer, but from the perspective of traditional cross entropy based training it is no better than the span "history".To address this problem, we propose a mixed objective that combines traditional cross entropy loss over positions with a measure of word overlap trained with reinforcement learning. We obtain the latter objective using self-critical policy learning in which the reward is based on word overlap between the proposed answer and the ground truth answer. Our mixed objective brings two benefits: (i) the reinforcement learning objective encourages answers that are textually similar to the ground truth answer and discourages those that are not; (ii) the cross entropy objective significantly facilitates policy learning by encouraging trajectories that are known to be correct. The ing objective is one that is both faithful to the evaluation metric and converges quickly in practice. In addition to our mixed training objective, we extend the Dynamic Coattention Network (DCN) by with a deep residual coattention encoder. This allows the network to build richer representations of the input by enabling each input sequence to attend to previous attention contexts. BID26 show that the stacking of attention layers helps model long-range DISPLAYFORM0 Figure 1: Deep residual coattention encoder.dependencies. We merge coattention outputs from each layer by means of residual connections to reduce the length of signal paths. 
BID6 show that skip layer connections facilitate signal propagation and alleviate gradient degradation. The combination of the deep residual coattention encoder and the mixed objective leads to higher performance across question types, question lengths, and answer lengths on the Stanford Question Answering Dataset (SQuAD) BID20 compared to our DCN baseline. The improvement is especially apparent on long questions, which require the model to capture long-range dependencies between the document and the question. Our model, which we call DCN+, achieves state-of-the-art on SQuAD, with 75.1% exact match accuracy and 83.1% F1. When ensembled, the DCN+ obtains 78.9% exact match accuracy and 86.0% F1. We consider the question answering task in which we are given a document and a question, and are asked to find the answer in the document. Our model is based on the DCN by, which consists of a coattention encoder and a dynamic decoder. The encoder first encodes the question and the document separately, then builds a codependent representation through coattention. The decoder then produces a start and end point estimate given the coattention. The DCN decoder is dynamic in the sense that it iteratively estimates the start and end positions, stopping when estimates between iterations converge to the same positions or when a predefined maximum number of iterations is reached. We make two significant changes to the DCN by introducing a deep residual coattention encoder and a mixed training objective that combines cross entropy loss from maximum likelihood estimation and reinforcement learning rewards from self-critical policy learning. Because it only has a single-layer coattention encoder, the DCN is limited in its ability to compose complex input representations. BID26 proposed stacked self-attention modules to facilitate signal traversal. They also showed that the network's ability to model long-range dependencies can be improved by reducing the length of signal paths. We propose two modifications to the coattention encoder to leverage these findings. First, we extend the coattention encoder with self-attention by stacking coattention layers. This allows the network to build richer representations over the input. Second, we merge coattention outputs from each layer with residual connections. This reduces the length of signal paths. Our encoder is shown in Figure 1.Suppose we are given a document of m words and a question of n words. Let L D ∈ R e×m and L Q ∈ R e×n respectively denote the word embeddings for the document and the question, where e is the dimension of the word embeddings. We obtain document encodings E D 1 and question DISPLAYFORM0 DISPLAYFORM1 Here, h denotes the hidden state size and the +1 indicates the presence of an additional sentinel word which allows the coattention to not focus on any part of the input. Like the original DCN, we add a non-linear transform to the question encoding. We compute the affinity matrix between the document and the question as DISPLAYFORM2. Let softmax (X) denote the softmax operation over the matrix X that normalizes X column-wise. The document summary vectors and question summary vectors are computed as DISPLAYFORM3 DISPLAYFORM4 We define the document coattention context as follows. Note that we drop the dimension corresponding to the sentinel vector -it has already been used during the summary computation and is not a potential position candidate for the decoder. DISPLAYFORM5 We further encode the summaries using another bidirectional LSTM. 
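A rough numpy sketch of the single coattention layer just described, with the sentinel column dropped and the column-wise softmax defined above; the exact placement of the transposes is our reading of the (partially garbled) equations and should be treated as an assumption:

```python
import numpy as np

def softmax_cols(X):
    X = X - X.max(axis=0, keepdims=True)
    e = np.exp(X)
    return e / e.sum(axis=0, keepdims=True)

def coattention(E_D, E_Q):
    """One coattention layer; E_D is (h, m), E_Q is (h, n), sentinel columns omitted."""
    A = E_D.T @ E_Q                          # (m, n) affinity matrix
    S_D = E_Q @ softmax_cols(A.T)            # (h, m) document summaries
    S_Q = E_D @ softmax_cols(A)              # (h, n) question summaries
    C_D = S_Q @ softmax_cols(A.T)            # (h, m) document coattention context
    return S_D, S_Q, C_D

rng = np.random.default_rng(0)
E_D, E_Q = rng.standard_normal((8, 6)), rng.standard_normal((8, 4))
S_D, S_Q, C_D = coattention(E_D, E_Q)
print(S_D.shape, S_Q.shape, C_D.shape)       # (8, 6) (8, 4) (8, 6)
```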
DISPLAYFORM6 Equation FORMULA4 to equation 5 describe a single coattention layer. We compute the second coattention layer in a similar fashion. Namely, let coattn denote a multi-valued mapping whose inputs are the two input sequences E D 1 and E Q 1. We have DISPLAYFORM7 The output of our encoder is then obtained as DISPLAYFORM8 where concat (A, B) denotes the concatenation between the matrices A and B along the first dimension. This encoder is different than the original DCN in its depth and its use of residual connections. We use not only the output of the deep coattention network C. This is akin to transformer networks BID26, which achieved stateof-the-art on machine translation using deep self-attention layers to help model long-range dependencies, and residual networks BID6, which achieved state-of-the-art in image classification through the addition of skip layer connections to facilitate signal propagation and alleviate gradient degradation. The DCN produces a distribution over the start position of the answer and a distribution over the end position of the answer. Let s and e denote the respective start and end points of the ground truth answer. Because the decoder of the DCN is dynamic, we denote the start and end distributions produced at the tth decoding step by p start t ∈ R m and p end t ∈ R m. For convenience, we denote the greedy estimate of the start and end positions by the model at the tth decoding step by s t and e t. Moreover, let Θ denote the parameters of the model. Similar to other question answering models, the DCN is supervised using the cross entropy loss on the start position distribution and the end position distribution: DISPLAYFORM9 Equation FORMULA10 states that the model accumulates a cross entropy loss over each position during each decoding step given previous estimates of the start and end positions. The question answering task consists of two evaluation metrics. The first, exact match, is a binary score that denotes whether the answer span produced by the model has exact string match with the ground truth answer span. The second, F1, computes the degree of word overlap between the answer span produced by the model and the ground truth answer span. We note that there is a disconnect between the cross entropy optimization objective and the evaluation metrics. For example, suppose we are given the answer estimates A and B, neither of which match the ground truth positions. However, A has an exact string match with the ground truth answer whereas B does not. The cross entropy objective penalizes A and B equally, despite the former being correct under both evaluation metrics. In the less extreme case where A does not have exact match but has some degree of word overlap with the ground truth, the F1 metric still prefers A over B despite its wrongly predicted positions. We encode this preference using reinforcement learning, using the F1 score as the reward function. DISPLAYFORM10 denote the sampled start and end positions from the estimated distributions at decoding step t. We define a trajectoryτ as a sequence of sampled start and end pointsŝ t andê t through all T decoder time steps. The reinforcement learning objective is then the negative expected rewards R over trajectories. DISPLAYFORM11 We use F 1 to denote the F1 scoring function and ans (s, e) to denote the answer span retrieved using the start point s and end point e. In equation 13, instead of using only the F1 word overlap as the reward, we subtract from it a baseline. 
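For reference, the cross-entropy part of the objective simply accumulates the negative log-likelihood of the ground-truth start and end positions over the T dynamic-decoder steps; a minimal sketch (names are illustrative):

```python
import numpy as np

def dcn_cross_entropy(p_starts, p_ends, s, e):
    """Sum of start/end position NLLs accumulated over the T dynamic-decoder steps.

    p_starts, p_ends: length-T lists of probability vectors over the m document positions
    s, e:             ground-truth start and end indices
    """
    loss = 0.0
    for p_s, p_e in zip(p_starts, p_ends):
        loss -= np.log(p_s[s] + 1e-12) + np.log(p_e[e] + 1e-12)
    return loss

m, T = 10, 3
p_starts = [np.full(m, 1.0 / m) for _ in range(T)]
p_ends = [np.full(m, 1.0 / m) for _ in range(T)]
print(dcn_cross_entropy(p_starts, p_ends, s=2, e=5))   # 2 * T * log(m) for uniform predictions
```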
BID4 show that a good baseline reduces the variance of gradient estimates and facilitates convergence. In our case, we employ a self-critic BID10 ) that uses the F1 score produced by the current model during greedy inference without teacher forcing. For ease of notation, we abbreviate R (s, e,ŝ T,ê T ; Θ) as R. As per BID25 and BID21, the expected gradient of a non-differentiable reward function can be computed as In equation 16, we approximate the expected gradient using a single Monte-Carlo sample τ drawn from p τ. This sample trajectory τ contains the start and end positionsŝ t andê t sampled during all decoding steps. DISPLAYFORM12 DISPLAYFORM13 One of the key problems in applying RL to natural language processing is the discontinuous and discrete space the agent must explore in order to find a good policy. For problems with large exploration space, RL approaches tend to be applied as fine-tuning steps after a maximum likelihood model has already been trained BID18 BID30. The ing model is constrained in its exploration during fine-tuning because it is biased by heavy pretraining. We instead treat the optimization problem as a multi-task learning problem. The first task is to optimize for positional match with the ground truth answer using the the cross entropy objective. The second task is to optimize for word overlap with the ground truth answer with the self-critical reinforcement learning objective. In a similar fashion to BID8, we combine the two losses using homoscedastic uncertainty as task-dependent weightings. DISPLAYFORM14 Here, σ ce and σ rl are learned parameters. The gradient of the cross entropy objective can be derived using straight-forward backpropagation. The gradient of the self-critical reinforcement learning objective is shown in equation 16. FIG0 illustrates how the mixed objective is computed. In practice, we find that adding the cross entropy task significantly facilitates policy learning by pruning the space of candidate trajectories -without the former, it is very difficult for policy learning to converge due to the large space of potential answers, documents, and questions. We train and evaluate our model on the Stanford Question Answering Dataset (SQuAD). We show our test performance of our model against other published models, and demonstrate the importance of our proposals via ablation studies on the development set. To preprocess the corpus, we use the reversible tokenizer from Stanford CoreNLP. For word embeddings, we use GloVe embeddings pretrained on the 840B Common Crawl corpus BID19 as well as character ngram embeddings by BID5. In addition, we concatenate these embeddings with context vectors (CoVe) trained on WMT BID15. For out of vocabulary words, we set the embeddings and context vectors to zero. We perform word dropout on the document which zeros a word embedding with probability 0.075. In addition, we swap the first maxout layer of the highway maxout network in the DCN decoder with a sparse mixture of experts layer. This layer is similar to the maxout layer, except instead of taking the top scoring expert, we take the top k = 2 expert. The model is trained using ADAM with default hyperparameters. Hyperparameters of our model are identical to the DCN. We implement our model using PyTorch. BID12, BiDAF BID22, DCN w/ CoVe BID15, ReasoNet BID24, Document Reader, FastQA BID29, DCN. The CoVe authors did not submit their model, which we use as our baseline, for SQuAD test evaluation. The performance of our model is shown in BID15. 
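A small sketch of the reward used by the self-critical term: token-level F1 between the proposed span and the ground truth, with the greedy decode serving as the baseline that is subtracted. The tokenization and answer-normalization details of the official SQuAD scorer are omitted here; the two loss terms are then combined with the learned task weights as in the mixed objective above:

```python
def f1_overlap(pred_tokens, gold_tokens):
    """Token-level F1 between a predicted and a ground-truth answer span."""
    remaining = list(gold_tokens)
    common = 0
    for tok in pred_tokens:
        if tok in remaining:
            common += 1
            remaining.remove(tok)
    if common == 0 or not pred_tokens or not gold_tokens:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def self_critical_reward(doc, sampled, greedy, gold):
    """R = F1(sampled span) - F1(greedy span); the greedy decode acts as the baseline."""
    spans = [doc[s:e + 1] for s, e in (sampled, greedy, gold)]
    return f1_overlap(spans[0], spans[2]) - f1_overlap(spans[1], spans[2])

doc = "some form of the ordinary Eastern or bubonic plague".split()
print(self_critical_reward(doc, sampled=(0, 8), greedy=(7, 8), gold=(0, 8)))   # positive reward
```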
This model is identical to the DCN by, except that it augments the word representations with context vectors trained on WMT16.Comparison to baseline DCN with CoVe. DCN+ outperforms the baseline by 3.2% exact match accuracy and 3.2% F1 on the SQuAD development set. FIG2 shows the consistent performance gain of DCN+ over the baseline across question types, question lengths, and answer lengths. In particular, DCN+ provides a significant advantage for long questions. Ablation study. The contributions of each part of our model are shown in Table 2. We note that the deep residual coattention yielded the highest contribution to model performance, followed by the mixed objective. The sparse mixture of experts layer in the decoder added minor improvements to the model performance. 71.3% -3.2% 79.9% -3.2% Table 2: Ablation study on the development set of SQuAD. Figure 4: Training curve of DCN+ with and without reinforcement learning. In the latter case, only the cross entropy objective is used. The mixed objective initially performs worse as it begins policy learning from scratch, but quickly outperforms the cross entropy model. Mixed objective convergence. The training curves for DCN+ with reinforcement learning and DCN+ without reinforcement learning are shown in Figure 4 to illustrate the effectiveness of our proposed mixed objective. In particular, we note that without mixing in the cross entropy loss, it is extremely difficult to learn the policy. When we combine the cross entropy loss with the reinforcement learning objective, we find that the model initially performs worse early on as it begins policy learning from scratch (shown in Figure 4b). However, with the addition of cross entropy loss, the model quickly learns a reasonable policy and subsequently outperforms the purely cross entropy model (shown in Figure 4a).Sample predictions. FIG4 compares predictions by DCN+ and by the baseline on the development set. Both models retrieve answers that have sensible entity types. For example, the second example asks for "what game" and both models retrieve an American football game; the third example asks for "type of Turing machine" and both models retrieve a type of turing machine. We find, however, that DCN+ consistently make less mistakes on finding the correct entity. This is especially apparent in the examples we show, which contain several entities or candidate answers of the correct type. In the first example, Gasquet wrote about the plague and called it "Great Pestilence". While he likely did think of the plague as a "great pestilence", the phrase "suggested that it would appear to be some form of ordinary Eastern or bubonic plague" provides evidence for the correct answer -"some form of ordinary Eastern or bubonic plague". Similarly, the second example states that Thomas Davis was injured in the "NFC Championship Game", but the game he insisted on playing in is the "Super Bowl". Finally, "multi-tape" and "single-tape" both appear in the sentence that provides provenance for the answer to the question. However, it is the "single-tape" Turing machine that implies quadratic time. In these examples, DCN+ finds the correct entity out of ones that have the right type whereas the baseline does not. Neural models for question answering. Current state-of-the-art approaches for question answering over unstructured text tend to be neural approaches. BID28 proposed one of the first conditional attention mechanisms in the Match-LSTM encoder. 
Coattention, bidirectional attention flow BID22, and self-matching attention all build codependent representations of the question and the document. These approaches of conditionally encoding two sequences are widely used inThe historian Francis Aidan Gasquet wrote about the'Great Pestilence' in 1893 and suggested that "it would appear to be some form of the ordinary Eastern or bubonic plague". He was able to adopt the epidemiology of the bubonic plague for the Black Death for the second edition in 1908, implicating rats and fleas in the process, and his interpretation was widely accepted for other ancient and medieval epidemics, such as the Justinian plague that was prevalent in the Eastern Roman Empire from 541 to 700 CE.What did Gasquet think the plague was?Carolina suffered a major setback when Thomas Davis, an 11-year veteran who had already overcome three ACL tears in his career, went down with a broken arm in the NFC Championship Game. Despite this, he insisted he would still find a way to play in the Super Bowl. His prediction turned out to be accurate. What game did Thomas Davis say he would play in, despite breaking a bone earlier on?But bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, Cobham-Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related" (Goldreich 2008, Chapter 1.2). This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.A language solved in quadratic time implies the use of what type of Turing machine? question answering. After building codependent encodings, most models predict the answer by generating the start position and the end position corresponding to the estimated answer span. The generation process utilizes a pointer network BID27 over the positions in the document. also introduced the dynamic decoder, which iteratively proposes answers by alternating between start position and end position estimates, and in some cases is able to recover from initially erroneous predictions. Neural attention models. Neural attention models saw early adoption in machine translation BID0 and has since become to de-facto architecture for neural machine translation models. Self-attention, or intra-attention, has been applied to language modeling, sentiment analysis, natural language inference, and abstractive text summarization BID18. BID26 extended this idea to a deep self-attentional network which obtained state-of-the-art in machine translation. Coattention, which builds codependent representations of multiple inputs, has been applied to visual question answering BID13. introduced coattention for question answering. Bidirectional attention flow BID22 and self-matching attention also build codependent representations between the question and the document. Reinforcement learning in NLP. Many tasks in natural language processing have evaluation metrics that are not differentiable. BID3 proposed a hierarchical reinforcement learning technique for generating text in a simulated way-finding domain. 
BID17 applied deep Q-networks to learn policies for text-based games using game rewards as feedback. BID11 introduced a neural conversational model trained using policy gradient methods, whose reward function consisted of heuristics for ease of answering, information flow, and semantic coherence. BID1 proposed a general actor-critic temporal-difference method for sequence prediction, performing metric optimization on language modeling and machine translation. Direct word overlap metric optimization has also been applied to summarization BID18, and machine translation BID30. We introduced DCN+, an state-of-the-art question answering model with deep residual coattention trained using a mixed objective that combines cross entropy supervision with self-critical policy learning. We showed that our proposals improve model performance across question types, question lengths, and answer lengths on the Stanford Question Answering Dataset (SQuAD). On SQuAD, the DCN+ achieves 75.1% exact match accuracy and 83.1% F1. When ensembled, the DCN+ obtains 78.9% exact match accuracy and 86.0% F1. | We introduce the DCN+ with deep residual coattention and mixed-objective RL, which achieves state of the art performance on the Stanford Question Answering Dataset. | 1,255 | scitldr |
Neural architecture search (NAS) has achieved breakthrough success in a great number of applications in the past few years. It could be time to take a step back and analyze the good and bad aspects in the field of NAS. A variety of algorithms search architectures under different search space. These searched architectures are trained using different setups, e.g., hyper-parameters, data augmentation, regularization. This raises a comparability problem when comparing the performance of various NAS algorithms. NAS-Bench-101 has shown success to alleviate this problem. In this work, we propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, on multiple datasets, and more diagnostic information. NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithms. The design of our search space is inspired by the one used in the most popular cell-based searching algorithms, where a cell is represented as a directed acyclic graph. Each edge here is associated with an operation selected from a predefined operation set. For it to be applicable for all NAS algorithms, the search space defined in NAS-Bench-201 includes all possible architectures generated by 4 nodes and 5 associated operation options, which in 15,625 neural cell candidates in total. The training log using the same setup and the performance for each architecture candidate are provided for three datasets. This allows researchers to avoid unnecessary repetitive training for selected architecture and focus solely on the search algorithm itself. The training time saved for every architecture also largely improves the efficiency of most NAS algorithms and presents a more computational cost friendly NAS community for a broader range of researchers. We provide additional diagnostic information such as fine-grained loss and accuracy, which can give inspirations to new designs of NAS algorithms. In further support of the proposed NAS-Bench-102, we have analyzed it from many aspects and benchmarked 10 recent NAS algorithms, which verify its applicability. The deep learning community is undergoing a transition from hand-designed neural architecture (; ; to automatically designed neural architecture (; ; b;). In its early era, the great success of deep learning was promoted by novel neural architectures, such as ResNet , Inception, VGGNet , and Transformer . However, manually designing one architecture requires human experts to try numerous different operation and connection choices . In contrast to architectures that are manually designed, those automatically found by neural architecture search (NAS) algorithms require much less human interaction and expert effort. These NAS-generated architectures have shown promising in many domains, such as image recognition (; ;, sequence modeling (; b;), etc. Recently, a variety of NAS algorithms have been increasingly proposed. While these NAS methods are methodically designed and show promising improvements, many setups in their algorithms are different. Different search space is utilized, e.g., different macro skeletons of the whole architecture ) and a different operation set for the micro cell within the skeleton , etc. After a good architecture is selected, various strategies can be employed to train this architecture and report the performance, e.g., different data augmentation (;, different regularization, different scheduler, and different selections of hyper-parameters (; a). 
The validation set for testing the performance of the selected architecture is not split in the same way . These discrepancies raise a comparability problem when comparing the performance of various NAS algorithms, making it difficult to conclude their contributions. In response to this problem, NAS-Bench-101 and NAS-HPO-Bench are proposed. However, some NAS algorithms can not be applied directly on NASBench-101, and NAS-HPO-Bench only has 144 candidate architectures, which maybe insufficient to evaluate NAS algorithms. To extend these two benchmarks and towards better reproducibility of NAS methods 1, we propose NAS-Bench-201 with a fixed cell search space, inspired from the search space used in the most popular neural cell-based searching algorithms ). As shown in Figure 1, each architecture consists of a predefined skeleton with a stack of the searched cell. In this way, architecture search is transformed into the problem of searching a good cell. Each cell is represented as a densely-connected directed acyclic graph (DAG) as shown in the bottom section of Figure 1. Here the node represents the sum of the feature maps and each edge is associated with an operation transforming the feature maps from the source node to the target node. The size of the search space is related to the number of nodes defined for the DAG and the size of the operation set. In NAS-Bench-201, we choose 4 nodes and 5 representative operation candidates for the operation set, which generates a total search space of 15,625 cells/architectures. Each architecture is trained multiple times on three different datasets. The training log and performance of each architecture are provided for each run. The training accuracy/test accuracy/training loss/test loss after every training epoch for each architecture plus the number of parameters and floating point operations (FLOPs) are accessible. Hopefully, NAS-Bench-201 will show its value in the field of NAS research. It provides a unified benchmark for most up-to-date NAS algorithms including all cell-based NAS methods. With NASBench-201, researchers can focus on designing robust searching algorithm while avoiding tedious hyper-parameter tuning of the searched architecture. Thus, NAS-Bench-201 provides a relatively fair benchmark for the comparison of different NAS algorithms. It provides the full training log of each architecture. Unnecessary repetitive training procedure of each selected architecture can be avoided so that researchers can target on the essence of NAS, i.e., search algorithm. Another benefit is that the validation time for NAS largely decreases when testing in NAS-Bench-201, which provides a computational power friendly environment for more participations in NAS. It provides of each architecture on multiple datasets. The model transferability can be thoroughly evaluated for most NAS algorithms. In NAS-Bench-201, we provide systematic analysis of the proposed search space. We also evaluate 10 recent advanced NAS algorithms including reinforcement learning (RL)-based methods, evolutionary strategy (ES)-based methods, differentiable-based methods, etc. We hope our empirical analysis can bring some insights to the future designs of NAS algorithms. Our NAS-Bench-201 is algorithm-agnostic. Put simply, it is applicable to almost any up-to-date NAS algorithms. In this section, we will briefly introduce our NAS-Bench-201. The search space of NASBench-201 is inspired by cell-based NAS algorithms (Section 2.1). 
NAS-Bench-201 evaluates each architecture on three different datasets (Section 2.2). All implementation details of NAS-Bench-201 are introduced in Section 2.3. NAS-Bench-201 also provides some diagnostic information which can be used for potentially better designs of future NAS algorithms (discussed in Section 2.4). Macro Skeleton. Our search space follows the design of its counterpart as used in the recent neural cell-based NAS algorithms . As shown in the top of Figure 1, the skeleton is initiated with one 3-by-3 convolution with 16 output channels and a batch normalization layer . The main body of the skeleton includes three stacks of cells, connected by a residual block. Each cell is stacked N = 5 times, with the number of output channels as 16, 32 and 64 for the first, second and third stages, respectively. The intermediate residual block is the basic residual block with a stride of 2 , which serves to downsample the spatial size and double the channels of an input feature map. The shortcut path in this residual block consists of a 2-by-2 average pooling layer with stride of 2 and a 1-by-1 convolution. The skeleton ends up with a global average pooling layer to flatten the feature map into a feature vector. Classification uses a fully connected layer with a softmax layer to transform the feature vector into the final prediction. Searched Cell. Each cell in the search space is represented as a densely connected DAG. The densely connected DAG is obtained by assigning a direction from the i-th node to the j-th node (i < j) for each edge in an undirected complete graph. Each edge in this DAG is associated with an operation transforming the feature map from the source node to the target node. All possible operations are selected from a predefined operation set, as shown in Figure 1 (bottom-right). In our NAS-Bench-201, the predefined operation set O has L = 5 representative operations: zeroize, skip connection, 1-by-1 convolution, 3-by-3 convolution, and 3-by-3 average pooling layer. The convolution in this operation set is an abbreviation of an operation sequence of ReLU, convolution, and batch normalization. The DAG has V = 4 nodes, where each node represents the sum of all feature maps transformed through the associated operations of the edges pointing to this node. We choose V = 4 to allow the search space to contain basic residual block-like cells, which requires 4 nodes. Densely connected DAG does not restrict the searched topology of the cell to be densely connected, since we include zeroize in the operation set, which is an operation of dropping the associated edge. Besides, since we do not impose the constraint on the maximum number of edges , our search space is applicable to most NAS algorithms, including all cell-based NAS algorithms. We train and evaluate each architecture on CIFAR-10, CIFAR-100 , and ImageNet-16-120 . We choose these three datasets because CIFAR and ImageNet are the most popular image classification datasets. We split each dataset into training, validation and test sets to provide a consistent training and evaluation settings for previous NAS algorithms . Most NAS methods use the validation set to evaluate architectures after the architecture is optimized on the training set. The validation performance of the architectures serves as supervision signals to update the searching algorithm. The test set is to evaluate the performance of each searching algorithm by comparing the indicators (e.g., accuracy, model size, speed) of their selected architectures. 
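To make the cell representation above concrete, the following sketch (an illustration only, not the released NAS-Bench-201 code; the operation identifiers and helper function are our own) encodes a cell as one operation choice per edge of the 4-node DAG. With 4 nodes there are 6 ordered edges and 5 operation choices per edge, which recovers the 15,625 candidate cells stated above.

```python
from itertools import product

# Operation set of NAS-Bench-201; the string identifiers here are illustrative.
OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]

NUM_NODES = 4  # node 0 is the cell input, node 3 is the cell output
EDGES = [(i, j) for j in range(NUM_NODES) for i in range(j)]  # 6 directed edges with i < j

def all_cells():
    """Enumerate every cell as a mapping from edge (i, j) to an operation name."""
    for choice in product(OPS, repeat=len(EDGES)):
        yield dict(zip(EDGES, choice))

print(len(OPS) ** len(EDGES))  # 5^6 = 15625 candidate cells in total
```

Because the "none" (zeroize) choice simply drops its edge, different encodings can describe the same effective topology, which is why the number of unique topologies reported in the appendix is smaller than 15,625.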
Previous methods use different splitting strategies, which may in various searching costs and unfair comparisons. We hope to use the proposed splits to unify the training, validation and test sets for a fairer comparison. It is a standard image classification dataset and consists of 60K 32×32 colour images in 10 classes. The original training set contains 50K images, with 5K images per class. The original test set contains 10K images, with 1K images per class. Due to the need of validation set, we split all 50K training images in CIFAR-10 into two groups. Each group contains 25K images with 10 classes. We regard the first group as the new training set and the second group as the validation set. This dataset is just like CIFAR-10. It has the same images as CIFAR-10 but categorizes each image into 100 fine-grained classes. The original training set on CIFAR-100 has 50K images, and the original test set has 10K images. We randomly split the original test set into two group of equal size -5K images per group. One group is regarded as the validation set, and another one is regarded as the new test set. ImageNet-16-120: We build ImageNet-16-120 from the down-sampled variant of ImageNet (ImageNet16×16). As indicated in , down-sampling images in ImageNet can largely reduce the computation costs for optimal hyper-parameters of some classical models while maintaining similar searching . down-sampled the original ImageNet to 16×16 pixels to form ImageNet16×16, from which we select all images with label ∈ to construct ImageNet-16-120. In sum, ImageNet-16-120 contains 151.7K training images, 3K validation images, and 3K test images with 120 classes. By default, in this paper, "the training set", "the validation set", "the test set" indicate the new training, validation, and test sets, respectively. Training Architectures. In order to unify the performance of every architecture, we give the performance of every architecture in our search space. In our NAS-Bench-201, we follow previous ). We train each architecture with the same strategy, which is shown in Table 1. For simplification, we denote all hyperparameters for training a model as a set H, and we use H † to denote the values of hyper-parameter that we use. Specifically, we train each architecture via Nesterov momentum SGD, using the cross-entropy loss for 200 epochs in total. We set the weight decay as 0.0005 and decay the learning rate from 0.1 to 0 with a cosine annealing ). We use the same H † on different datasets, except for the data augmentation which is slightly different due to the image resolution. On CIFAR, we use the random flip with probability of 0.5, the random crop 32×32 patch with 4 pixels padding on each border, and the normalization over RGB channels. On ImageNet-16-120, we use a similar strategy but random crop 16×16 patch with 2 pixels padding on each border. Apart from using H † for all datasets, we also use a different hyper-parameter set H ‡ for CIFAR-10. It is similar to H † but its total number of training epochs is 12. In this way, we could provide bandit-based algorithms more options for the usage of short training budget (see more details in appendix). Metrics. We train each architecture with different random seeds on different datasets. We evaluate each architecture A after every training epoch. NAS-Bench-201 provides the training, validation, Table 2. Users can easily use our API to query the of each trial of A, which has negligible computational costs. 
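As an illustration of such a query, the sketch below shows the intended usage pattern; the class name, method names, and benchmark file name are assumptions based on the released package and may differ from the actual interface.

```python
# Illustrative sketch only: exact names may differ from the released nas_201_api package.
from nas_201_api import NASBench201API as API

api = API("NAS-Bench-201-v1_0-e61699.pth")   # load the precomputed benchmark once

arch_index = 123                              # any index in 0..15624
info = api.query_meta_info_by_index(arch_index)        # all recorded trials of this cell
valid = info.get_metrics("cifar10-valid", "x-valid")   # CIFAR-10 validation metrics
test = info.get_metrics("cifar10", "ori-test")         # CIFAR-10 test metrics

print(valid["accuracy"], test["accuracy"])    # table look-ups, no training required
```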
In this way, researchers could significantly speed up their searching algorithm on these datasets and focus solely on the essence of NAS. We list the training/test loss/accuracies over different split sets on four datasets in Table 2. On CIFAR-10, we train the model on the training set and evaluate it on the validation set. We also train the model on the training and validation set and Table 3: We summarize some characteristics of NAS-Bench-101 and NAS-Bench-201. Our NASBench-201 can directly be applicable to almost any up-to-date NAS algorithms. In contrast, as pointed in , NAS algorithms based on parameter sharing or network morphisms cannot be directly evaluated on NAS-Bench-101. Besides, NAS-Bench-201 provides train/validation/test performance on three (one for NAS-Bench-101) different datasets so that the generality of NAS algorithms can be evaluated. It also provides some diagnostic information that may provide insights to design better NAS algorithms. evaluate it on the test set. These two paradigm follow the typical experimental setup on CIFAR-10 in previous literature (; ;). On CIFAR-100 and ImageNet-16-120, we train the model on the training set and evaluate it on both validation and test sets. Validation accuracy is a commonly used supervision signal for NAS. However, considering the expensive computational costs for evaluating the architecture, the signal is too sparse. In our NASBench-201, we also provide some diagnostic information which is some extra statistics obtained during training each architecture. Collecting these statistics almost involves no extra computation cost but may provide insights for better designs and training strategies of different NAS algorithms, such as platform-aware NAS , accuracy prediction , mutationbased NAS , etc. Architecture Computational Costs: NAS-Bench-201 provides three computation metrics for each architecture -the number of parameters, FLOPs, and latency. Algorithms that target on searching architectures with computational constraints, such as models on edge devices, can use these metrics directly in their algorithm designs without extra calculations. Fine-grained training and evaluation information. NAS-Bench-201 tracks the changes in loss and accuracy of every architecture after every training epochs. These fine-grained training and evaluation information shows the tendency of the architecture performance and could indicate some attributes of the model, such as the speed of convergence, the stability, the over-fitting or under-fitting levels, etc. These attributes may benefit the designs of NAS algorithms. Besides, some methods learn to predict the final accuracy of an architecture based on the of few early training epochs . These algorithm can be trained faster and the performance of the accuracy prediction can be evaluated using the fine-grained evaluation information. Parameters of optimized architecture. Our NAS-Bench-201 releases the trained parameters for each architecture. This can provide ground truth label for hypernetwork-based NAS methods , which learn to generate parameters of an architecture. Other methods mutate an architecture to become another one ). With NAS-Bench-201, researchers could directly use the off-the-shelf parameters instead of training from scratch and analyze how to transfer parameters from one architecture to another. To the best of our knowledge, NAS-Bench-101 is the only existing large-scale architecture dataset. 
Similar to NAS-Bench-201, NAS-Bench-101 also transforms the problem of architecture search into the problem of searching neural cells, represented as a DAG. Differently, NAS-Bench-101 defines operation candidates on the node, whereas we associate operations on the edge as inspired from (; b; . We summarize characteristics of our NAS-Bench-201 and NAS-Bench-101 in Table 3 . The main highlights of our NAS-Bench-201 are as follows. NAS-Bench-201 is algorithm-agnostic while NAS-Bench- 101 without any modification is only applicable to selected algorithms . The original complete search space, based on the nodes in NAS-Bench-101, is extremely huge. So, it is exceedingly difficult to efficiently traverse the training of all architectures. To trade off the computational cost and the size of the search space, they constrain the maximum number of edges in the DAG. However, it is difficult to incorporate this constraint in all NAS algorithms, such as NAS algorithms based on parameter-sharing . Therefore, many NAS algorithms cannot be directly evaluated on NAS-Bench-101. Our NAS-Bench-201 solves this problem by sacrificing the number of nodes and including all possible edges so that our search space is algorithm-agnostic. We provide extra diagnostic information, such as architecture computational cost, fine-grained training and evaluation time, etc., which give inspirations to better and efficient designs of NAS algorithms utilizing these diagnostic information. NAS-HPO-Bench hyper-parameter space for a simple 2-layer feed-forward network. Since NAS-HPO-Bench has only 144 architectures, it could be insufficient to evaluate different NAS algorithms. An overview of architecture performance. The performance of each architecture is shown in Figure 2. We show the test accuracy of every architecture in our search space in the left column of Figure 2. The training, validation and test accuracy with respect to the number of parameters are shown in the rest three columns, respectively. Results show that a different number of parameters will affect the performance of the architectures, which indicates that the choices of operations are essential in NAS. We also observe that the performance of the architecture can vary even when the number of parameters stays the same. This observation indicates the importance of how the operations/cells are connected. We compare the architectures with a clas-sical human-designed architecture (ResNet) in all cases, which is indicated by an orange star mark. ResNet shows competitive performance in three datasets, however, it still has room to improve, i.e., about 2% compared to the best architecture in CIFAR-100 and ImageNet-16-120, about 1% compared to the best one with the same amount of parameters in CIFAR-100 and ImageNet-16-120. Architecture ranking on three datasets. The ranking of every architecture in our search space is shown in Figure 3, where the architecture ranked in CIFAR-10 (x-axis) is ranked as in y-axis in CIFAR-100 and ImageNet-16-120, indicated by green and red markers respectively. The performance of the architectures shows a generally consistent ranking over the three datasets with slightly different variance, which serves to test the generality of the searching algorithm. Correlations of validation and test accuracies. We visualize the correlation between the validation and test accuracy within one dataset and across datasets in Figure 4. The correlation within one dataset is high compared to cross-dataset correlation. 
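The ranking and correlation analyses above reduce to a few lines once per-architecture accuracies are collected; the sketch below is a generic illustration in which the accuracy arrays are placeholders rather than part of the benchmark API.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# Placeholder: acc[d] holds the final accuracy of all 15,625 architectures on dataset d,
# gathered by querying the benchmark (random values here only to keep the sketch runnable).
acc = {d: np.random.rand(15625) for d in ["cifar10", "cifar100", "imagenet16-120"]}

tau, _ = kendalltau(acc["cifar10"], acc["cifar100"])        # cross-dataset rank agreement
rho, _ = spearmanr(acc["cifar10"], acc["imagenet16-120"])

k = 200                                                      # restrict to the top-k of one dataset
top = np.argsort(-acc["cifar10"])[:k]
tau_top, _ = kendalltau(acc["cifar10"][top], acc["cifar100"][top])
print(tau, rho, tau_top)
```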
The correlation dramatically decreases as we only pick the top performing architectures. When we directly transfer the best architecture in one dataset to another (a vanilla strategy), it can not 100% secure a good performance. This phenomena is a call for better transferable NAS algorithms instead of vanilla strategy. We show the ranking of the performance of all architectures in different time stamps in Figure 5. The ranking based on the validation set (y axis) gradually converges to the ranking based on the final test accuracy (x axis). In this section, we evaluate 10 recent searching methods on our NAS-Bench-201, which can serve as baselines for future NAS algorithms in our dataset. Specifically, we evaluate some typical NAS algorithms: (I) Random Search algorithms, e.g., random search (RS) , random search with parameter sharing (RSPS) . (II) ES methods, e.g., REA. (III) RL algorithms, e.g., REINFORCE , ENAS . (IV) Differentiable algorithms. e.g., first order DARTS (DARTS-V1) , second order DARTS (DARTS-V2), GDAS (b), and SETN (a). (V) HPO methods, e.g., BOHB . We experimented all NAS algorithms on a single GeForce GTX 1080 Ti GPU. training on the CIFAR-10 train set and evaluating on its validation set; training on the CIFAR-10 train+validation sets and evaluating on its test set; training on the CIFAR-10 or ImageNet-16-120 train set and evaluating on their validation or test sets. "optimal" indicates the highest mean accuracy for each set. We report the mean and std of 500 runs for RS, REA, REINFORCE, and BOHB and of 3 runs for RSPS, DARTS, GDAS, SETN, and ENAS. Figure 6: We show of 500 runs for RS, REA, REINFORCE, and BOHB on CIFAR-10. The architecture is searched on CIFAR-10 and we report its validation accuracy (solid line) and test accuracy (dashed line) on three datasets. Each individual run is sorted by the validation accuracy of the searched architecture. We show the benefits for speed using our NAS-Bench-201 for different NAS algorithms in Table 4. For each NAS algorithm, once the searching procedure finished and the final architecture is found, our NAS-Bench-201 can directly return the performance of this architecture. With NAS-Bench-201, NAS algorithms without parameter sharing can significantly reduce the searching time into seconds. Notably, it still requires several GPU hours for NAS algorithms with parameter sharing to complete the searching. All algorithms use the training and validation set of CIFAR-10 to search architectures. In Table 5, Figure 6, Figure 7, and Figure 8, we report the performance of the searched architectures plus the optimal architecture on three datasets. We make the following observations: NAS methods without parameter sharing (REA, RS, REINFORCE, and BOHB) outperform others. This be because training a model for a few epochs with the converged LR scheduler (H ‡) can provide a good relative In Figure 7 and Figure 8, we show the performance of the architecture derived from each algorithm per searching epoch. DARTS-V1 will gradually over-fit to an architecture with all skip-connection operations. DARTS-V2 can alleviate this problem to some extent but will still over-fit after more epochs. It can further alleviate this problem by using batch statistics for BN layers. We train RSPS, GDAS, SETN, and ENAS five times longer than DARTS (250 epochs vs. 50 epochs). This is because at every iteration, RSPS, GDAS, SETN, and ENAS only optimize 1 |O|=5 parameters of the shared parameters, whereas DARTS optimize all shared parameters. 
The searched architecture performs similar for GDAS after 50 searching epochs. RSPS and SETN show a higher variance of the searched architecture compared to GDAS. Clarification. We have tried our best to implement each method. However, still, some algorithms might obtain non-optimal since their hyper-parameters might not fit our NAS-Bench-201. We empirically found that some NAS algorithms are sensitive to some hyper-parameters, whereas we try to compare them in a fair way as we can (Please see more explanation in Appendix). If researchers can provide better with different hyper-parameters, we are happy to update according to the new experimental . We also welcome more NAS algorithms to test on our dataset and would include them accordingly. How to avoid over-fitting on NAS-Bench-201? Our NAS-Bench-201 provides a benchmark for NAS algorithms, aiming to provide a fair and computational cost-friendly environment to the NAS community. The trained architecture and the easy-to-access performance of each architecture might provide some insidious ways for designing algorithms to over-fit the best architecture in our NASBench-201. Thus, we propose some rules which we wish the users will follow to achieve the original intention of NAS-Bench-201, a fair and efficient benchmark. 1. No regularization for a specific operation. Since the best architecture is known in our benchmark, specific designs to fit the structural attributes of the best performed architecture are insidious ways to fit our NAS-Bench-201. For example, as mentioned in Section 5, we found that the best architecture with the same amount of parameters for CIFAR10 on NAS-Bench-201 is ResNet. Restrictions on the number of residual connections is a way to over-fit the CIFAR10 benchmark. While this can give a good on this benchmark, the searching algorithm might not generalize to other benchmarks. 2. Use the provided performance. The training strategy affects the performance of the architecture. We suggest the users stick to the performance provided in our benchmark even if it is feasible to use other H to get a better performance. This provides a fair comparison with other algorithms. 3. Report of multiple searching runs. Since our benchmark can help to largely decrease the computational cost for a number of algorithms. Multiple searching runs give stable of the searching algorithm with acceptable time cost. Limitation regarding to hyper-parameter optimization (HPO). The performance of an architecture depends on the hyper-parameters H for its training and the optimal configuration of H may vary for different architectures. In NAS-Bench-201, we use the same configuration for all architectures, which may bring biases to the performance of some architectures. One related solution is HPO, which aims to search the optimal hyper-parameter configuration. However, searching the optimal hyper-parameter configurations and the architecture in one shot is too computationally expensive and still is an open problem. Potential designs using diagnostic information in NAS-Bench-201. As pointed in Section 2.4, different kinds of diagnostic information are provided. We hope that more insights about NAS could be found by analyzing these diagnostic information and further motivate potential solutions for NAS. For example, parameter sharing is the crucial technique to improve the searching efficiency, but the shared parameter would sacrifice the accuracy of each architecture. 
Could we find a better way to share parameters of each architecture from the learned 15,625 models' parameters? Generalization ability of the search space. It is important to test the generalization of observations on this dataset. An idea strategy is to do all benchmark experiments on a much larger search space. Unfortunately, it is prohibitive regarding the expensive computational cost. We bring some from and to provide some preliminary evidence of generalization. In Figure 2, we show the rankings of RS, REA, and REINFORCE is (REA > REINFORCE > RS). This is consistent with in NAS-Bench-101, which contains more architecture candidates. For NAS methods with parameter sharing, we find that GDAS ≥ DARTS ≥ ENAS, which is also consistent with in NAS-Bench-1SHOT1. Therefore, observations from our NAS-Bench-201 may generalize to other search spaces. In this paper, we introduce NAS-Bench-201 that extends the scope of reproducible NAS. In NASBench-201, almost any NAS algorithms can be directly evaluated. We train and evaluate 15,625 architecture on three different datasets, and we provide regarding different metrics. We comprehensively analyze our dataset and test some recent NAS algorithms on NAS-Bench-201 to serve as baselines for future works. In future, we will consider HPO and NAS together and much larger search space. We welcome researchers to try their NAS algorithms on our NAS-Bench-201 and would update the paper to include their . Table 6: We compare the correlation of different training strategies. The correlation coefficient between the validation accuracy after several training epochs on CIFAR-10 and the validation accuracy of full trained models on the CIFAR-10 training set, the test accuracy on CIFAR-10 trained with the training and validation sets, the validation/test accuracy on CIFAR-100 trained with the CIFAR-100 training set, the validation/test accuracy on ImageNet-16-120 trained with the ImageNet-16-120 training set. We use the validation accuracy after "EPOCHS" training epochs, where the the cosine annealing converged after "TOTAL" epochs. Number of unique architectures. In our NAS-Bench-201, we encode each architecture by a 6-dimensional vector. The i-th value in this vector indicates the operation in the i-th edge in a cell. Since we have 5 possible operations, there are 5 6 =15625 total unique models in this encoding. If we identify the isomorphic cell caused by the "skip-connect" operation, there are 12751 unique topology structures. If we identify the isomorphic cell caused by both "skip-connect" and "zeroize" operations, there are only 6466 unique topology structures. Note that, due to the numerical error, when given the same inputs, two architectures with the isomorphic cell might have different outputs. Note that, when we build our NAS-Bench-201, we train and evaluate every architecture without considering isomorphism. NAS-Bench-201 with bandit-based algorithms. Bandit-based algorithms, such as Hyperband and BOHB , usually train models with a short time budget. In our NAS-Bench-201, on CIFAR-10, we provide two options if you want to obtain the performance of a model trained with a short time budget: Results from H ‡, where the cosine annealing converged at the 12-th epoch. Results from H †, where the cosine annealing converged at the 200-th epoch. 
As shown in Table 6, the performance of these converged networks is much more likely to correlate highly with the performance after a larger number of iterations than just taking an earlier point of a single cosine annealing trajectory. Therefore, we choose the first option for all NAS algorithms that do not use parameter sharing. Based on the publicly available codes, we re-implement 10 NAS algorithms by ourselves to search architectures on our NAS-Bench-201. We provide the implementation details of each searching algorithm below. We consider the searching time of the first order DARTS as a baseline (about 12000 seconds on CIFAR-10). When evaluating RS, REINFORCE, ENAS, and BOHB, we set the total time budget as 12000 seconds for them. By default, for NAS algorithms with parameter sharing, we follow most hyper-parameters from DARTS and do not learn the scale and shift parameters for BN layers in each searching cell. We setup the searching procedure of RSPS, GDAS, SETN, ENAS five times longer than DARTS, because they optimize 1 5 of parameters but DARTS optimize all parameters per iteration. Most configurations can be found at https://github.com/D-X-Y/ AutoDL-Projects/tree/master/configs/nas-benchmark/algos. Random search (RS) . We randomly select architectures until the total 0 100 200 300 400 The index of runs The accuracy (%) sample-size= 3 sample-size= 5 sample-size=10 Figure 9: The effect of different sample sizes for REA on the CIFAR-10 validation set. training time plus the time of one evaluation procedure reaches the total budget. We use the validation accuracy after 12 training epochs (H ‡), which can be obtained directly in our NAS-Bench-201 as discussed in Section 2.4. The architecture with the highest validation accuracy is selected as the final searched architecture. Regularized evolution for image classifier architecture search (REA). We set the initial population size as 10, the number of cycles as infinity. The sample size is chosen as 10 from, according to Figure 9. We finish the algorithm once the simulated training time of the traversed architecture reaches the time budgets (12000 seconds). We use the validation accuracy after 12 training epochs (H ‡) as the fitness. ). We follow to use the REINFORCE algorithm as Figure 10: We evaluate the effect of different learning rates for REINFORCE, and report the CIFAR-10 validation accuracy of the searched architecture. a baseline RL method. We use an architecture encoding to parameterize each candidate in our search space as (; b . According to Figure 10, the learning date is set as. The momentum for exponential moving average of 0.9. We finish the training once the simulated training time reaches the time budgets (12000 seconds). The first order and second order DARTS (DARTS-V1 and DARTS-V2) . We train the shared parameters via Nesterov momentum SGD, using the cross-entropy loss for 50 epochs in total. We set weight decay as 0.0005 and momentum of 0.9. We decay the learning rate from 0.025 to 0.001 via cosine learning rate scheduler and clip the gradient by 5. We train the architecture encoding via Adam with the learning rate of 0.0003 and the weight decay of 0.001. We use the batch size of 64. The random horizontal flipping, random cropping with padding, and normalization are used for data augmentation. We choose these hyper-parameters following . Random search with parameter sharing (RSPS) . We train RSPS with the similar hyper-parameters as that of DARTS. Differently, we train the algorithm in 250 epochs in total. 
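The random search baseline described above amounts to a short loop once the benchmark supplies both the 12-epoch validation accuracy (the H‡ setting) and the simulated training time; the sketch below is our own illustration with a hypothetical query helper, not the evaluation code behind the reported numbers.

```python
import random

def random_search(benchmark, budget=12000.0, space_size=15625):
    """benchmark.query(i) is a hypothetical helper returning the 12-epoch validation
    accuracy of architecture i and the simulated cost (training plus one evaluation)."""
    spent, best_index, best_acc = 0.0, None, -1.0
    while spent < budget:
        i = random.randrange(space_size)     # uniform sample from the cell search space
        acc_12ep, cost = benchmark.query(i)  # table look-up, no real training
        spent += cost                        # charge the simulated time against the budget
        if acc_12ep > best_acc:
            best_acc, best_index = acc_12ep, i
    return best_index                        # its full-training accuracy is then reported
```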
During each searching iteration, we randomly sample one architecture in each batch training. Each architecture uses the training mode for BN during training and the evaluation mode during evaluation . After training the shared parameters, we evaluate 100 randomly selected architectures with the shared parameters. For each architecture, we randomly choose one mini-batch with 256 validation samples to estimate the validation accuracy instead of using the whole validation set to calculate the precise validation accuracy. The one with the highest estimated validation accuracy will be selected. With the size of this mini-batch increasing, the more precise validation accuracy would be obtained and the better architecture would be selected. However, the searching costs will also be increased. We use the size of 256 to trade-off the accuracy and cost. Gradient-based search using differentiable architecture sampler (GDAS) (b). We use the most hyper-parameters as that of DARTS but train it for 250 epochs in total. The Gumbel-Softmax temperature is linearly decayed from 10 to 0.1. (a). We use the most hyperparameters as that of DARTS but train it for 250 epochs in total. After training the shared parameters, we select 100 architectures with the highest probabilities (encoded by the learned architecture en- Table 7 : The correlation between the probability or the one-shot validation accuracy (OSVA) and the ground truth accuracy on the CIFAR-10 validation set. "BN with Train" indicates that, during evaluation, the mean and variance of BN layers are calculated within each mini-batch. "BN with Eval" indicates that we accumulate mean and variance of BN layers in the training set and use these accumulated mean and variance for evaluation. We report the correlation as the average of 3 runs. coding). We evaluate these 100 selected architectures with the shared parameters. The evaluation procedure for these 100 architectures are the same as RSPS. ENAS . We use a two layer LSTM as the controller with the hidden size of 32. We use the temperature of 5 and the tanh constant of 2.5 for the sampling logits Following , we also add the the controller's sample entropy to the reward, weighted by 0.0001. We optimize the controller with Adam using the constant learning rate of 0.001. We optimize the network weights with SGD following the learning rate scheduler as the original paper and the batch size of 128. We did not impose any penalty to a specific operation. BOHB . We choose to use BOHB as an HPO algorithm on our NAS-Bench-201. We follow to set up the hyper-parameters for BOHB. We set the number of samples for the acquisition function to 4, the random fraction to 0%, the minimum-bandwidth to 0.3, the bandwidth factor to 3. We finish the algorithm once the simulated training time reaches the time budgets (12000 seconds). Parameter sharing becomes a common technique to improve the efficiency of differentiable neural architecture search methods (; b; a). The shared parameters are shared over millions of architecture candidates. It is almost impossible for the shared parameters to be optimal for all candidates. We hope to evaluate the trained shared parameters quantitatively. Specially, we use DARTS, GDAS, and SETN to optimize the shared parameters and the architecture encoding on CIFAR-10. For each architecture candidate, we can calculate its probability of being a good architecture from the architecture encoding following SETN (a). 
In addition, we can also evaluate a candidate using the shared parameters on the validation set to obtain "the one-shot validation accuracy". It is computationally expensive to evaluate all candidates on the whole validation set. To accelerate this procedure, we evaluate each architecture on a mini-batch of size 2048 and use the accuracy on this mini-batch to approximate the one-shot validation accuracy. Ideally, the architecture ranking sorted by the probability or by the one-shot validation accuracy should be similar to the ground-truth ranking. We show the correlation between each proxy metric and the ground-truth validation accuracy in Table 7. There are several observations: (i) The correlation between the probability (encoded by the architecture encoding) and the ground-truth accuracy is low. This suggests that the argmax-based deriving strategy cannot secure a good architecture, and it remains open how to derive a good architecture after optimizing the shared parameters. (ii) The behavior of BN layers is important to the one-shot validation accuracy. The mean and variance accumulated on the training set are harmful to the one-shot accuracy; instead, each architecture candidate should re-calculate the mean and variance of its BN layers. (iii) GDAS introduced Gumbel-softmax sampling when optimizing the architecture encoding. This strategy leads to a higher correlation for the learned probability than that of DARTS. (iv) The uniform sampling strategy for training the shared parameters can increase the correlation for the one-shot accuracy compared to the joint optimization strategy. In NAS-Bench-201 (version 1.0), every architecture is trained at least once. To be specific, 6219 architectures are trained once, 1621 architectures are trained twice, and 7785 architectures are trained three times with different random seeds. Moreover, we are actively training all architectures with more seeds and will continue updating our NAS-Bench-201. | A NAS benchmark applicable to almost any NAS algorithms. | 1,256 | scitldr |
Generative Adversarial Networks (GANs) are one of the most popular tools for learning complex high dimensional distributions. However, generalization properties of GANs have not been well understood. In this paper, we analyze the generalization of GANs in practical settings. We show that discriminators trained on discrete datasets with the original GAN loss have poor generalization capability and do not approximate the theoretically optimal discriminator. We propose a zero-centered gradient penalty for improving the generalization of the discriminator by pushing it toward the optimal discriminator. The penalty guarantees the generalization and convergence of GANs. Experiments on synthetic and large scale datasets verify our theoretical analysis. GANs BID6 are one of the most popular tools for modeling high dimensional data. The original GAN is, however, highly unstable and often suffers from mode collapse. Much of recent researches has focused on improving the stability of GANs BID21 BID8 BID14 BID10. On the theoretical aspect, BID17 proved that gradient based training of the original GAN is locally stable. BID8 further proved that GANs trained with Two Timescale Update Rule (TTUR) converge to local equilibria. However, the generalization of GANs at local equilibria is not discussed in depth in these papers. BID2 showed that the generator can win by remembering a polynomial number of training examples. The implies that a low capacity discriminator cannot detect the lack of diversity. Therefore, it cannot teach the generator to approximate the target distribution. In section 4, we discuss the generalization capability of high capacity discriminators. We show that high capacity discriminators trained with the original GAN loss tends to overfit to the mislabeled samples in training dataset, guiding the generator toward collapsed equilibria (i.e. equilibria where the generator has mode collapse). BID3 proposed to measure the generalization capability of GAN by estimating the number of modes in the model distribution using the birthday paradox. Experiments on several datasets showed that the number of modes in the model distribution is several times greater than the number of training examples. The author concluded that although GANs might not be able to learn distributions, they do exhibit some level of generalization. Our analysis shows that poor generalization comes from the mismatch between discriminators trained on discrete finite datasets and the theoretically optimal discriminator. We propose a zero-centered gradient penalty for improving the generalization capability of (high capacity) discriminators. Our zero-centered gradient penalty pushes the discriminator toward the optimal one, making GAN to converge to equilibrium with good generalization capability. Our contributions are as follow:1. We show that discriminators trained with the original GAN loss have poor generalization capability. Poor generalization in the discriminator prevents the generator from learning the target distribution. TAB0 compares the key properties of our 0-GP with one centered GP (1-GP) BID7 and zero centered GP on real/fake samples only (0-GP-sample) BID13. 
p r the target distribution p g the model distribution p z the noise distribution d x the dimensionality of a data sample (real or fake) d z the dimensionality of a noise sample supp(p) the support of distribution p x ∼ p r a real sample z ∼ p z a noise vector drawn from the noise distribution p z y = G(z) a generated sample D r = {x 1, ..., x n} the set of n real samples D (t) g = y (t) 1,..., y (t) m the set of m generated samples at step t DISPLAYFORM0 the training dataset at step t Gradient penalties are widely used in GANs literature. There are a plethora of works on using gradient penalty to improve the stability of GANs BID13 BID7 BID19 BID22 BID20. However, these works mostly focused on making the training of GANs stable and convergent. Our work aims to improve the generalization capability of GANs via gradient regularization. BID3 showed that the number of modes in the model distribution grows linearly with the size of the discriminator. The implies that higher capacity discriminators are needed for better approximation of the target distribution. BID3 studied the tradeoff between generalization and discrimination in GANs. The authors showed that generalization is guaranteed if the discriminator set is small enough. In practice, rich discriminators are usually used for better discriminative power. Our GP makes rich discriminators generalizable while remaining discriminative. Although less mode collapse is not exactly the same as generalization, the ability to produce more diverse samples implies better generalization. There are a large number of papers on preventing mode collapse in GANs. BID21; BID23 introduced a number of empirical tricks to help stabilizing GANs. showed the importance of divergences in GAN training, leading to the introduction of Wasserstein GAN. The use of weak divergence is further explored by BID15; BID16. BID12 advocated the use of mixed-batches, mini-batches of real and fake data, GP Formula Improve generalization Convergence guarantee to smooth out the loss surface. The method exploits the distributional information in a mini-batch to prevent mode collapse. VEEGAN BID24 uses an inverse of the generator to map the data to the prior distribution. The mismatch between the inverse mapping and the prior is used to detect mode collapse. If the generator can remember the entire training set, then the inverse mapping can be arbitrarily close the the prior distribution. It suggests that VEEGAN might not be able to help GAN to generalize outside of the training dataset. Our method helps GANs to discover unseen regions of the target distribution, significantly improve the diversity of generated samples. DISPLAYFORM0 In the original GAN, the discriminator D maximizes the following objective BID6 showed that if the density functions p g and p r are known, then for a fixed generator G the optimal discriminator is DISPLAYFORM0 DISPLAYFORM1 In the beginning of the training, p g is very different from p r so we have p r (x) p g (x), for x ∈ D r and p g (y)p r (y), for y ∈ D g. Therefore, in the beginning of the training D * (x) ≈ 1, for x ∈ D r and D * (y) ≈ 0, for y ∈ D g. As the training progresses, the generator will bring p g closer to p r. The game reaches the global equilibrium when p r = p g. 
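The objective and the optimal discriminator referred to as Eqn. 1 and Eqn. 2 above did not survive extraction; for reference, their standard forms (as in Goodfellow et al.) are:

```latex
% Eqn. 1: discriminator objective of the original GAN
\max_{D}\; L(D) \;=\; \mathbb{E}_{x \sim p_r}\!\left[\log D(x)\right]
                   \;+\; \mathbb{E}_{z \sim p_z}\!\left[\log\!\left(1 - D(G(z))\right)\right]

% Eqn. 2: optimal discriminator for a fixed generator
D^{*}(x) \;=\; \frac{p_r(x)}{p_r(x) + p_g(x)}, \qquad
\text{so that } D^{*}(x) = \tfrac{1}{2} \text{ everywhere when } p_g = p_r .
```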
At the global equilibrium, DISPLAYFORM2 One important of the original paper is that, if the discriminator is optimal at every step of the GAN algorithm, then p g converges to p r.In practice, density functions are not known and the optimal discriminator is approximated by optimizing the classification performance of a parametric discriminator DISPLAYFORM3 We call a discriminator trained on a discrete finite dataset an empirical discriminator. The empirically optimal discriminator is denoted byD *. BID2 DISPLAYFORM4 A discriminator D defines a divergence between two distributions. The performance of a discriminator with good generalization capability on the training dataset should be similar to that on the entire data space. In practice, generalization capability of D can be estimated by measuring the difference between its performance on the training dataset and a held-out dataset. It has been observed that if the discriminator is too good at discriminating real and fake samples, the generator cannot learn effectively BID6. The phenomenon suggests thatD * does not well approximate D *, and does not guarantee the convergence of p g to p r. In the following, we clarify the mismatch betweenD * and D *, and its implications. g are disjoint with probability 1 even when p g and p r are exactly the same. D * perfectly classifies the real and the fake datasets, andD DISPLAYFORM0 g. The value ofD * on D (t) does not depend on the distance between the two distributions and does not reflect the learning progress. The value ofD * on the training dataset approximates that of D * in the beginning of the learning process but not when the two distributions are close. When trained using gradient descent on a discrete finite dataset with the loss in Eqn. 1, the discriminator D is pushed towardD *, not D *. This behavior does not depend on the size of training set (see FIG0, 1b), implying that the original GAN is not guaranteed to converge to the target distribution even when given enough data. When the generator gets better, generated samples are more similar to samples from the target distribution. However, regardless of their quality, generated samples are still labeled as fake in Eqn. 1. The training dataset D is a bad dataset as it contains many mislabeled examples. A discriminator trained on such dataset will overfit to the mislabeled examples and has poor generalization capability. It will misclassify unseen samples and cannot teach the generator to generate these samples. FIG0 and 1b demonstrate the problem on a synthetic dataset consisting of samples from two Gaussian distributions. The discriminator in FIG0 overfits to the small dataset and does not generalize to new samples in FIG0. Although the discriminator in FIG0 was trained on a larger dataset which is sufficient to characterize the two distributions, it still overfits to the data and its value surface is very different from that of the theoretically optimal discriminator in FIG0.An overfitted discriminator does not guide the model distribution toward target distribution but toward the real samples in the dataset. This explains why the original GAN usually exhibits mode collapse behavior. Finding the empirically optimal discriminator using gradient descent usually requires many iterations. Heuristically, overfitting can be alleviated by limiting the number of discriminator updates per generator update. BID6 recommended to update the discriminator once every generator update. 
In the next subsection, we show that limiting the number of discriminator updates per generator update prevents the discriminator from overfitting. * is costly to find and maintain. We consider here a weaker notion of optimality which can be achieved in practical settings. Definition 1 (-optimal discriminator). Given two disjoint datasets D r and D g, and a number > 0, a discriminator D is -optimal if DISPLAYFORM0 As observed in BID6,D * does not generate usable gradient for the generator. Goodfellow et al. proposed the non-saturating loss for the generator to circumvent this vanishing gradient problem. For an -optimal discriminator, if is relatively small, then the gradient of the discriminator w.r.t. fake datapoints might not vanish and can be used to guide the model distribution toward the target distribution. Proposition 2. Given two disjoint datasets D r and D g, and a number > 0, an -optimal discriminator D exists and can be constructed as a one hidden layer MLP with O(d x (m + n)) parameters. Proof. See appendix B.Because deep networks are more powerful than shallow ones, the size of a deep -optimal discriminator can be much smaller than O(d x (m + n)). From the formula, the size of a shallow -optimal discriminator for real world datasets ranges from a few to hundreds of millions parameters. That is comparable to the size of discriminators used in practice. showed that even when the generator can generate realistic samples, a discriminator that can perfectly classify real and fake samples can be found easily using gradient descent. The experiment verified that -optimal discriminator can be found using gradient descent in practical settings. We observe that the norm of the gradient w.r.t. the discriminator's parameters decreases as fakes samples approach real samples. If the discriminator's learning rate is fixed, then the number of gradient descent steps that the discriminator has to take to reach -optimal state should increase. Proposition 3. Alternating gradient descent with the same learning rate for discriminator and generator, and fixed number of discriminator updates per generator update (Fixed-Alt-GD) cannot maintain the (empirical) optimality of the discriminator. Fixed-Alt-GD decreases the discriminative power of the discriminator to improve its generalization capability. The proof for linear case is given in appendix C.In GANs trained with Two Timescale Update Rule (TTUR) BID8, the ratio between the learning rate of the discriminator and that of the generator goes to infinity as the iteration number goes to infinity. Therefore, the discriminator can learn much faster than the generator and might be able to maintain its optimality throughout the learning process. Let's consider a simplified scenario where the real and the fake datasets each contains a single datapoint: DISPLAYFORM0. Updating the generator according to the gradient from the discriminator will push y (t) toward x. The absolute value of directional derivative of DISPLAYFORM1 DISPLAYFORM2 The directional derivate of the -optimal discriminator explodes as the fake datapoint approaches the real datapoint. Directional derivative exploding implies gradient exploding at datapoints on the line segment connecting x and y (t). If in the next iteration, the generator produces a sample in a region where the gradient explodes, then the gradient w.r.t. the generator's parameters explodes. Let's consider the following line integral DISPLAYFORM3 where C is the line segment from y (t) to x. 
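The epsilon-optimality condition, the directional-derivative bound, and the line integral referenced above were lost in extraction; up to notation, our reconstruction from the surrounding definitions reads:

```latex
% Definition 1: D is \epsilon-optimal on D_r and D_g if
D(x) \ge 1 - \epsilon \;\; \forall x \in D_r,
\qquad
D(y) \le \epsilon \;\; \forall y \in D_g .

% Along the unit direction s from y^{(t)} to x, some point v on the segment satisfies
\bigl|\nabla_{s} D(v)\bigr|
  \;\ge\; \frac{D(x) - D(y^{(t)})}{\lVert x - y^{(t)} \rVert}
  \;\ge\; \frac{1 - 2\epsilon}{\lVert x - y^{(t)} \rVert},
% which explodes as y^{(t)} approaches x.

% Line integral over the segment C from y^{(t)} to x:
\int_{C} (\nabla D)_{v} \cdot ds \;=\; D(x) - D\bigl(y^{(t)}\bigr).
```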
As the model distribution gets closer to the target distribution, the length of C should be non increasing. Therefore, maximizing D(x) − D(y (t) ), or the discriminative power of D, leads to the maximization of the directional derivative of D in the direction ds. The original GAN loss makes D to maximize its discriminative power, encouraging gradient exploding to occur. Gradient exploding happens in the discriminator trained with TTUR in FIG2. Because TTUR can help the discriminator to maintain its optimality, gradient exploding happens and persists throughout the training process. Without TTUR, the discriminator cannot maintain its optimality so gradient exploding can happen sometimes during the training but does not persist (FIG2 . Because of the saturated regions in the sigmoid function used in neural network based discriminators, the gradient w.r.t. datapoints in the training set could vanishes. However, gradient exploding must happen at some datapoints on the path between a pair of samples, where the sigmoid function does not saturate. In FIG0, gradient exploding happens near the decision boundary. In practice, D r and D g contain many datapoints and the generator is updated using the average of gradients of the discriminator w.r.t. fake datapoints in the mini-batch. If a fake datapoint y 0 is very close to a real datapoint x 0, the gradient (∇D) y0 might explode. When the average gradient is computed over the mini-batch, (∇D) y0 outweighs other gradients. The generator updated with this average gradient will move many fake datapoints in the direction of (∇D) y0, toward x 0, making mode collapse visible. Although the theoretically optimal discriminator D * is generalizable, the original GAN loss does not push empirical discriminators toward D *. We aim to improve the generalization capability of empirical discriminators by pushing them toward D *. For any input v ∈ supp(p r) ∪ supp(p g), the value of D * (v) goes to 1 2 and the gradient (∇D) v goes to 0 as p g approaches p r. Consider again the line integral in Eqn. 4. As D * (x) and D * (y) approach 1 2 for all x ∈ supp(p r) and y ∈ supp(p g), we have DISPLAYFORM0 for all pairs of x and y and all paths C from y to x. That means, the discriminative power of D * must decrease as the two distributions become more similar. To push an empirical discriminator D toward D *, we force D to satisfy two requirements: DISPLAYFORM1 The first requirement can be implemented by sampling some datapoints v ∈ supp(p r) ∪ supp(p g) and force (∇D) v to be 0. The second requirement can be implemented by sampling pairs of real and fake datapoints (x, y) and force D(x) − D(y) to be 0. The two requirements can be added to the discriminator's objective as followŝ DISPLAYFORM0 where L is the objective in Eqn. 1. However, as discussed in section 4.2.2, an -optimal discriminator can have zero gradient on the training dataset and have gradient exploding outside of the training dataset. The gradient norm could go to infinity even when D(x) − D(y) is small. Regulating the difference between D(x) and D(y) is not an efficient way to prevent gradient exploding. We want to prevent gradient exploding on every path in supp(p r) ∪ supp(p g). Because (∇D *) v → 0 for all v ∈ supp(p r) ∪ supp(p g) as p g approach p r, we could push the gradient w.r.t. every datapoint on every path C ∈ supp(p r) ∪ supp(p g) toward 0. We note that, if (∇D) v → 0, ∀ v ∈ C then C (∇D) v · ds → 0. 
Therefore, the two requirements can be enforced by a single zero-centered gradient penalty of the form DISPLAYFORM1 The remaining problem is how to find the path C from a fake to a real sample which lies inside supp(p r) ∪ supp(p g). Because we do not have access to the full supports of p r and p g, and the supports of two distributions could be disjoint in the beginning of the training process, finding a path which lies completely inside the support is infeasible. In the current implementation, we approximate C with the straight line connecting a pair of samples, although there is no guarantee that all datapoints on that straight line are in supp(p r) ∪ supp(p g).That in the following objective DISPLAYFORM2 wherex = αx + (1 − α)y, x ∼ p r, y ∼ p g, and α ∼ U 1. We describe a more sophisticated way of finding a better path in appendix F.The larger λ is, the stronger (∇D)x is pushed toward 0. If λ is 0, then the discriminator will only focus on maximizing its discriminative power. If λ approaches infinity, then the discriminator has maximum generalization capability and no discriminative power. λ controls the tradeoff between discrimination and generalization in the discriminator. BID13 proposed to force the gradient w.r.t. datapoints in the real and/or fake dataset(s) to be 0 to make the training of GANs convergent. In section 4, we showed that for discrete training dataset, an empirically optimal discriminatorD * always exists and could be found by gradient descent. Although (∇D *) v = 0, ∀ v ∈ D,D * does not satisfy the requirement in Eqn. 5 and have gradient exploding when some fake datapoints approach a real datapoint. The discriminators in FIG0, 2c and 2d have vanishingly small gradients on datapoints in the training dataset and very large gradients outside. They have poor generalization capability and cannot teach the generator to generate unseen real datapoints. Therefore, zero-centered gradient penalty on samples from p r and p g only cannot help improving the generalization of the discriminator. Non-zero centered GPs do not push an empirical discriminator toward D * because the gradient does not converge to 0. A commonly used non-zero centered GP is the one-centered GP (1-GP) BID7 which has the following form DISPLAYFORM0 wherex = αx + (1 − α)y, x ∼ p r, y ∼ p g, and α ∼ U. Although the initial goal of 1-GP was to enforce Lipschitz constraint on the discriminator 2, BID5 found that 1-GP prevents gradient exploding, making the original GAN more stable. 1-GP forces the norm of gradients w.r.t. datapoints on the line segment connecting x and y to be 1. If all gradients on the line segment have norm 1, then the line integral in Eqn. 4 could be as large as x − y. Because the distance between random samples grows with the dimensionality, in high dimensional space x − y is greater than 1 with high probability. The discriminator could maximize the value of the line integral without violating the Lipschitz constraint. The discriminator trained with 1-GP, therefore, can overfit to the training data and have poor generalization capability. BID13 showed that zero-centered GP on real and/or fake samples (0-GP-sample) makes GANs convergent. The penalty is based on the convergence analysis for the Dirac GAN, an 1-dimensional linear GAN which learns the Dirac distribution. The intuition is that when p g is the same as p r, the gradient of the discriminator w.r.t. 
the fake datapoints (which are also real datapoints) should be 0 so that generator will not move away when being updated using this gradient. If the gradient from the discriminator is not 0, then the generator will oscillate around the equilibrium. Our GP forces the gradient w.r.t. all datapoints on the line segment between a pair of samples (including the two endpoints) to be 0. As a , our GP also prevents the generator from oscillating. Therefore, our GP has the same convergence guarantee as the 0-GP-sample. Discriminators trained with the original GAN loss tends to focus on the region of the where fake samples are close to real samples, ignoring other regions. The phenomenon can be seen in FIG2, 2c, 2d, 2h and 2i. Gradients in the region where fake samples are concentrated are large while gradients in other regions, including regions where real samples are located, are very small. The generator cannot discover and generate real datapoints in regions where the gradient vanishes. When trained with the objective in Eqn. 6, the discriminator will have to balance between maximizing L and minimizing the GP. For finite λ, the GP term will not be exactly 0. Let DISPLAYFORM0. Among discriminators with the same value of γ, gradient descent will find the discriminator that maximizes L. As discussed in section 4.2.2, maximizing L leads to the maximization of norms of gradients on the path from y to x. The discriminator should maximize the value η = Ex[(∇D)x ]. If γ is fixed then η is maximized when ∇Dx(i) = ∇Dx(j), ∀ i, j (Cauchy-Schwarz inequality). Therefore, our zero-centered GP encourages the gradients at different regions of the real data space to have the same norm. The capacity of D is distributed more equally between regions of the real data space, effectively reduce mode collapse. The effect can be seen in FIG2 1-GP encourages | ∇Dx(i) − 1| = | ∇Dx(j) − 1|, ∀ i, j. That allows gradient norms to be smaller than 1 in some regions and larger than 1 in some other regions. The problem can be seen in FIG2. The code is made available at https://github.com/htt210/ GeneralizationAndStabilityInGANs. To test the effectiveness of gradient penalties in preventing overfitting, we designed a dataset with real and fake samples coming from two Gaussian distributions and trained a MLP based discriminator on that dataset. The is shown in FIG0. As predicted in section 5.3, 0-GP-sample does not help to improve generalization. 1-GP helps to improve generalization. The value surface in FIG0 is smoother than that in FIG0. However, as discussed in section 5.3, 1-GP cannot help much in higher dimensional space where the pair-wise distances are large. The discriminator trained with our 0-GP has the best generalization capability, with a value surface which is the most similar to that of the theoretically optimal one. We increased the number of discriminator updates per generator update to 5 to see the effect of GPs in preventing overfitting. On the MNIST dataset, GAN without GP and with other GPs cannot learn anything after 10,000 iterations. GAN with our 0-GP can still learn normally and start produce recognizable digits after only 1,000 iterations. The confirms that our GP is effective in preventing overfitting in the discriminator. OF GANS SYNTHETIC DATAWe tested different gradient penalties on a number of synthetic datasets to compare their effectiveness. The first dataset is a mixture of 8 Gaussians. 
The dataset is scaled up by a factor of 10 to simulate the situation in high dimensional space where random samples are far from each other. The is shown in FIG2. GANs with other gradient penalties all fail to learn the distribution and exhibit mode collapse problem to different extents. GAN with our 0-GP (GAN-0-GP) can successfully learn the distribution. Furthermore, GAN-0-GP can generate datapoints on the circle, demonstrating good generalization capability. The original GAN collapses to some disconnected modes and cannot perform smooth interpolation between modes: small change in the input in large, unpredictable change in the output. GAN with zero-centered GP on real/fake samples only also exhibits the same "mode jumping" behavior. The behavior suggests that these GANs tend to remember the training dataset and have poor generalization capability. Fig. 9 in appendix D demonstrates the problem on MNIST dataset. We observe that GAN-0-GP behaves similar to Wasserstein GAN as it first learns the overall structure of the distribution and then focuses on the modes. An evolution sequence of GAN-0-GP is shown in FIG4 in appendix D. Results on other synthetic datasets are shown in appendix D. The on MNIST dataset is shown in Fig. 3. After 1,000 iterations, all other GANs exhibit mode collapse or cannot learn anything. GAN-0-GP is robust to changes in hyper parameters such BID23 on ImageNet of GAN-0-GP, GAN-0-GP-sample, and WGAN-GP. The code for this experiment is adapted from BID13. We used λ = 10 for all GANs as recommended by Mescheder et al. The critic in WGAN-GP was updated 5 times per generator update. To improve convergence, we used TTUR with learning rates of 0.0001 and 0.0003 for the generator and discriminator, respectively. as learning rate and optimizers. When Adam is initialized with large β 1, e.g. 0.9, GANs with other GPs cannot learn anything after many iterations. More samples are given in appendix D. DISPLAYFORM0 We observe that higher value of λ improves the diversity of generated samples. For λ = 50, we observe some similar looking samples in the generated data. This is consistent with our conjecture that larger λ leads to better generalization. When trained on ImangeNet BID4 ), GAN-0-GP can produce high quality samples from all 1,000 classes. We compared our method with GAN with 0-GP-sample and WGAN-GP. GAN-0-GP-sample is able to produce samples of state of the art quality without using progressive growing trick BID10. The in Fig. 4 shows that our method consistently outperforms GAN-0-GP-sample. GAN-0-GP and GAN-0-GP-sample outperform WGAN-GP by a large margin. Image samples are given in appendix D. In this paper, we clarify the reason behind the poor generalization capability of GAN. We show that the original GAN loss does not guide the discriminator and the generator toward a generalizable equilibrium. We propose a zero-centered gradient penalty which pushes empirical discriminators toward the optimal discriminator with good generalization capability. Our gradient penalty provides better generalization and convergence guarantee than other gradient penalties. Experiments on diverse datasets verify that our method significantly improves the generalization and stability of GANs. Pengchuan Zhang, Qiang Liu, Dengyong Zhou, Tao Xu, and Xiaodong He. On the discriminationgeneralization tradeoff in GANs. In International Conference on.A PROOF FOR PROPOSITION 1For continuous random variable V, P(V = v) = 0 for any v. 
The probability of finding a noise vector z such that G(z) is exactly equal to a real datapoint x ∈ D r via random sampling is 0. Therefore, the probability of a real datapoint x i being in the fake dataset D g is 0. Similarly, the probability of any fake datapoint being in the real dataset is 0. DISPLAYFORM0 Furthermore, due to the curse of dimensionality, the probability of sampling a datapoint which is close to another datapoint in high dimensional space also decrease exponentially. The distances between datapoints are larger in higher dimensional space. That suggests that it is easier to separate D r and D (t) g in higher dimensional space. To make the construction process simpler, let's assume that samples are normalized: DISPLAYFORM0 Let's use the following new notations for real and fake samples: DISPLAYFORM1 We construct the -optimal discriminator D as a MLP with 1 hidden layer. Let W 1 ∈ R (m+n)×dx and W 2 ∈ R m+n be the weight matrices of D. The total number of parameters in DISPLAYFORM2 We set the value of W 1 as DISPLAYFORM3 and W 2 as DISPLAYFORM4 Given an input v ∈ D, the output is computed as: DISPLAYFORM5 where σ is the softmax function. Let a = W 1 v, we have DISPLAYFORM6 As k → ∞, σ(W 1 v i) becomes a one-hot vector with the i-th element being 1, all other elements being 0. Thus, for large enough k, for any v j ∈ D, the output of the network is DISPLAYFORM7 C FIXED-ALT-GD CANNOT MAINTAIN THE OPTIMALITY OF -DISCRIMINATORS Let's consider the case where the real and the fake dataset each contain a single datapoint D r = {x}, D DISPLAYFORM8, and the discriminator and the generator are linear: DISPLAYFORM9 and the objective is also linear (Wasserstein GAN's objective): DISPLAYFORM10 The same learning rate α is used for D and G.At step t, the discriminator is -optimal DISPLAYFORM11 The gradients w.r.t. θ D and θ G are DISPLAYFORM12 DISPLAYFORM13 If the learning rate α is small enough, x − y (t) should decrease as t increases. As the empirical fake distribution converges to the empirical real distribution, x − y (t) → 0. The norm of gradient w.r.t. θ D, therefore, decreases as t increases and vanishes when the two empirical distributions are the same. From Eqn. 10, we see that, in order to maintain D's -optimality when x − y (t) decreases, θ D has to increase. From Eqn. 10 and 12, we see that the gradient w.r.t. θ G grows as the two empirical distributions are more similar. As x − y (t) → 0, DISPLAYFORM14 Because the same learning rate α is used for both G and D, G will learn much faster than D. Furthermore, because x − y (t) decreases as t increases, the difference DISPLAYFORM15 increases with t. The number of gradient steps that D has to take to reach the next -optimal state increases, and goes to infinity as x − y (t) → 0. Therefore, gradient descent with fixed number of updates to θ D cannot maintain the optimality of D.The derivation for the objective in Eqn. 1 is similar.: GANs trained with different gradient penalty on swissroll dataset. Although GAN-1-GP is able to learn the distribution, the gradient field has bad pattern. GAN-1-GP is more sensitive to change in hyper parameters and optimizers. GAN-1-GP fails to learn the scaled up version of the distribution. The entire ImageNet dataset with all 1000 classes was used in the experiment. Because of our hardware limits, we used images of size 64 × 64. We used the code from BID13, available at https://github.com/LMescheder/GAN stability, for our experiment. 
Generator and Discriminator are ResNets, each contains 5 residual blocks. All GANs in our experiment have the same architectures and hyper parameters. The configuration for WGAN-GP5 is as follows. | We propose a zero-centered gradient penalty for improving generalization and stability of GANs | 1,257 | scitldr |
Generative adversarial networks (GANs) are a class of deep generative models which aim to learn a target distribution in an unsupervised fashion. While they were successfully applied to many problems, training a GAN is a notoriously challenging task and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial amount of ``tricks". The success in many practical applications coupled with the lack of a measure to quantify the failure modes of GANs ed in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We reproduce the current state of the art and go beyond fairly exploring the GAN landscape. We discuss common pitfalls and reproducibility issues, open-source our code on Github, and provide pre-trained models on TensorFlow Hub. Deep generative models are a powerful class of unsupervised machine learning models. The power of these models was recently harnessed in a variety of applications, including image generation, learned compression, and domain transfer BID13; BID0 BID0. Generative adversarial networks BID8 are one of the main approaches to learning such models in a fully unsupervised fashion. The GAN framework can be viewed as a two-player game where the first player, the generator, is learning to transform some simple input distribution (usually a standard multivariate Normal or uniform) to a distribution on the space of images, such that the second player, the discriminator, cannot tell whether the samples belong to the true distribution or were synthesized. Both players aim to minimize their own loss and the solution to the game is the Nash equilibrium where neither player can improve their loss unilaterally. This powerful framework can also be derived by minimizing a divergence between the model distribution and the true distribution BID20.Training GANs involves solving a minimax problem over the parameters of the generator and the discriminator which are usually parameterized as deep convolutional neural networks. Consequently, this minimax problem is notoriously hard to solve in practice. As a , a plethora of loss functions, regularization and normalization schemes, coupled with neural architecture choices, have been proposed BID8; BID19 BID9 BID18.Our contributions. In this work we provide a thorough empirical analysis of these competing approaches, and help the researchers and practitioners navigate this space. We first define the GAN landscape -the set of loss functions, normalization and regularization schemes, and the most commonly used architectures. We explore this search space on several modern large-scale data sets by means of hyperparameter optimization, considering both "good" sets of hyperparameters reported in the literature, as well as ones obtained by Gaussian Process regression. By analyzing the impact of the loss function, we conclude that the non-saturating loss is sufficiently stable across data sets, architectures and hyperparameters. We then proceed to decompose the effect of various normalization and regularization schemes, as well as varying architectures. We show that both gradient penalty BID9 as well as spectral normalization BID19 are useful in the context of high-capacity architectures. Finally, we discuss some common pitfalls, reproducibility issues, and practical considerations. 
We provide reference implementations, including training and evaluation code on Github 1 and provide pre-trained models on TensorFlow Hub. 2 2 THE GAN LANDSCAPE Let P denote the target (true) distribution and Q the model distribution. BID8 suggest two loss functions: the minimax GAN and the non-saturating (NS) GAN. In the former the discriminator minimizes the negative log-likelihood for the binary classification task. In the latter the generator maximizes the probability of generated samples being real. In this work we consider the non-saturating loss as it is known to outperform the minimax variant. The corresponding loss functions are DISPLAYFORM0 In Wasserstein GAN (WGAN) ) the authors propose to consider the Wasserstein divergence instead of the original Jensen-Shannon (JS). In particular, under the optimal discriminator, minimizing the proposed value function with respect to the generator minimizes the Wasserstein distance between P and Q. The drawback is that one has to ensure a 1-Lipschitz discriminator due to exploited Kantorovich-Rubenstein duality. The corresponding loss functions are DISPLAYFORM1 Finally, we consider the least-squares loss (LS) which corresponds to minimizing the Pearson χ 2 divergence between P and Q BID18. The intuition is that this loss function is smooth and saturates slower than the sigmoid cross-entropy loss of the JS formulation. The corresponding loss functions are DISPLAYFORM2 Gradient norm penalty. In the context of Wasserstein GANs this penalty can be interpreted as a soft penalty for the violation of 1-Lipschitzness (WGAN GP) BID9. Hereby, the gradient is evaluated on a linear interpolation between training points and generated samples as a proxy to the optimal coupling. The gradient penalty can also be evaluated around the data manifold which encourages the discriminator to be piece-wise linear in that region (Dragan) BID16. However, the gradient norm penalty can be considered purely as a regularizer for the discriminator and it was shown that it can improve the performance for other losses BID7. Furthermore, the penalty can be scaled by the "confidence" of the discriminator in the context of f-divergences . A drawback of gradient penalty (GP) regularization scheme is that it can depend on the model distribution Q which changes during training. One drawback of Dragan is that it is unclear to which extent the Gaussian assumption for the manifold holds. Finally, computing the gradient norms implies a non-trivial running time penalty -essentially doubling the running time. We also investigate the impact of a regularizer ubiquitous in supervised learning -the L 2 penalty on all the weights of the network. Discriminator normalization. Normalizing the discriminator can be useful from both the optimization perspective (more efficient gradient flow, a more stable optimization), as well as from the representation perspective -the representation richness of the layers in a neural network depends on the spectral structure of the corresponding weight matrices BID19.From the optimization point of view, several techniques have found their way into the GAN literature, namely batch normalization (BN) BID12 and layer normalization (LN) BID2. Batch normalization in the context of GANs was suggested by BID6 and further popularized by. It normalizes the pre-activations of nodes in a layer to mean β and standard deviation γ, where both β and γ are parameters learned for each node in the layer. The normalization is done on the batch level and for each node separately. 
In contrast, with Layer normalization, all the hidden units in a layer share the same normalization terms β and γ, but different samples are normalized differently BID2. Layer normalization was first applied in the context of GANs in BID9.From the representation point of view, one has to consider the neural network as a composition of (possibly non-linear) mappings and analyze their spectral properties. In particular, for the discriminator to be a bounded linear operator it suffices to control the maximum singular value. This approach is followed in BID19 where the authors suggest dividing each weight matrix, including the matrices representing convolutional kernels, by their spectral norm. Furthermore, the authors argue that a key advantage of spectral normalization over competing approaches is that it in discriminators of higher rank. We explore two classes of architectures in this study: deep convolutional generative adversarial networks (DCGAN) and residual networks (ResNet) BID10, both of which are ubiquitous in GAN research. Recently, BID19 defined a variation of DCGAN, so called SNDCGAN. Apart from minor updates (cf. Section 4) the main difference to DCGAN is the use of an eight-layer discriminator network. The details of both networks are summarized in TAB3. The other architecture, ResNet19, is an architecture with five ResNet blocks in the generator and six ResNet blocks in the discriminator, that can operate on 128 × 128 images. We follow the ResNet setup from BID19, with the small difference that we simplified the design of the discriminator. The detailed parameters of discriminator and generator are summarized in TAB4 and TAB4. With this setup we were able to reproduce the current state of the art . An ablation study on various ResNet modifications is available in the Appendix. We focus on several recently proposed metrics well suited to the image domain. For an in-depth overview of quantitative metrics we refer the reader to BID4.Inception Score (IS). Proposed by , IS offers a way to quantitatively evaluate the quality of generated samples. Intuitively, the conditional label distribution of samples containing meaningful objects should have low entropy, and the variability of the samples should be high. which can be expressed as DISPLAYFORM0 The authors found that this score is well-correlated with scores from human annotators. Drawbacks include insensitivity to the prior distribution over labels and not being a proper distance. As an alternative BID11 proposed the Frechet Inception Distance (FID). Samples from P and Q are first embedded into a feature space (a specific layer of InceptionNet). Then, assuming that the embedded data follows a multivariate Gaussian distribution, the mean and covariance are estimated. Finally, the Fréchet distance between these two Gaussians is computed, i.e. DISPLAYFORM1 where (µ x, Σ x), and (µ y, Σ y) are the mean and covariance of the embedded samples from P and Q, respectively. The authors argue that FID is consistent with human judgment and more robust to noise than IS. Furthermore, the score is sensitive to the visual quality of generated samplesintroducing noise or artifacts in the generated samples will reduce the FID. In contrast to IS, FID can detect intra-class mode dropping, i.e. a model that generates only one image per class can score a perfect IS, but will suffer from have a high FID BID17. BID3 argued that FID has no unbiased estimator and suggest Kernel Inception distance (KID) instead. 
In Appendix B we empirically compare KID to FID and observe that both metrics are very strongly correlated (Spearman rank-order correlation coefficient of 0.994 for LSUN-BEDROOM and 0.995 for CELEBA-HQ-128 datasets). As a we focus on FID as it is likely to in the same ranking. Multi-scale Structural Similarity for Image Quality (MS-SSIM) and Diversity. A critical issue in GANs are mode collapse and mode-dropping -failing to capture a mode, or low-diversity of generated samples from a given mode. The MS-SSIM score is used for measuring the similarity of two images where higher MS-SSIM score indicates more similar images. Several recent works suggest using the average pairwise MS-SSIM score within a given class as a proxy for the diversity of generated samples BID21 BID7. The drawback of this [10 DISPLAYFORM2 approach is that we do not know the class corresponding to the generated sample, so it is usually applied on one-class data sets, such as CELEBA-HQ-128. In this work we use the same setup as in BID7 . In particular, given a batch size b, we compute the average pairwise MS-SSIM score on 5 batches, of 5 × b × (b − 1)/2 image pairs in total. We stress that the diversity should only be taken into account together with the FID and IS metrics. We consider three data sets, namely CIFAR10, CELEBA-HQ-128, and LSUN-BEDROOM. The LSUN-BEDROOM data set contains slightly more than 3 million images 3. We randomly partition the images into a train and test set whereby we use 30588 images as the test set. Secondly, we use the CELEBA-HQ data set of 30k images BID14. We use the 128 × 128 × 3 version obtained by running the code provided by the authors. 4 We use 3000 examples as the test set and the remaining examples as the training set. Finally, we also include the CIFAR10 data set which contains 70K images (32x32x3), partitioned into 60000 training instances and 10000 testing instances. The baseline FID scores are 12.6 for CELEBA-HQ-128, 3.8 for LSUN-BEDROOM, and 5.19 for CIFAR10. Details on FID computation are presented in Section 4. The search space for GANs is prohibitively expensive: exploring all combinations of all losses, normalization and regularization schemes, and architectures is outside of the practical realm. Instead, in this study we analyze several slices of this tensor for each data set. In particular, to ensure that we can reproduce existing , we perform a study over the subset of this tensor on CIFAR10. We then proceed to analyze the performance of these models across CELEBA-HQ-128 and LSUN-BEDROOM. In Section 3.1 we fix everything but the loss. In Section 3.2 we fix everything but the regularization and normalization scheme. Finally, in Section 3.3 we fix everything but the architecture. This allows us to decouple some of these design choices and provide some insight on what matters most. As noted in BID17, one major issue preventing further progress is the hyperparameter tuning -currently, the community has converged to a small set of parameter values which work on some data sets, and may completely fail on others. In this study we combine the best hyperparameter settings found in the literature BID19, and perform Gaussian Process regression in the bandit setting to possibly uncover better hyperparameter settings. We then consider the top performing models and discuss the impact of the computational budget. We summarize the fixed hyperparameter settings in TAB0 which contains the "good" parameters reported in recent publications BID7 BID19 BID9. 
In particular, we consider the cross product of these parameters to obtain 24 hyperparameter settings to reduce the bias. Finally, to provide a fair comparison, we perform Gaussian Process optimization in the bandit setting on the parameter ranges provided in TAB0. We run 12 rounds (i.e. we communicate with the oracle 12 times) of the optimization, each with a batch of 10 hyperparameter sets selected based on the FID scores from the of the previous iterations. Figure 1: Impact of the loss function: FID distribution for top 5% models. The non-saturating (NS) loss is stable over both data sets. Gradient penalty and spectral normalization improve the sample quality. From the computational budget perspective (i.e. how many models one needs to train to reach a certain FID), both spectral normalization and gradient penalty perform better than the baseline, but the former is more efficient. As we explore the number of discriminator updates per generator update (1 or 5), this leads to an additional 240 hyperparameter settings which in some cases outperform the previously known hyperparameter settings. Batch size is set to 64 for all the experiments. We use a fixed the number of discriminator update steps of 100K for LSUN-BEDROOM data set and CELEBA-HQ-128 data set, and 200K for CIFAR10 data set. We apply the Adam optimizer BID15. Given that there are 4 major components (loss, architecture, regularization, normalization) to analyze for each data set, it is infeasible to explore the whole landscape. Hence, we opt for a more pragmatic solution -we keep some dimensions fixed, and vary the others. For each experiment we highlight three aspects: FID distribution of the top 5% of the trained models, the corresponding sample diversity score, and the tradeoff between the computational budget (i.e. number of models to train) and model quality in terms of FID. Each model was retrained 5 times with a different random seed and we report the median score. The variance for models obtained by Gaussian Process regression is handled implicitly so we train each model once. Here the loss is either the non-saturating loss (NS) BID8, the least-squares loss (LS) BID18, or the Wasserstein loss (WGAN). We use the ResNet19 with generator and discriminator architectures detailed in TAB4. We consider the most prominent normalization and regularization approaches: gradient penalty BID9, and spectral normalization BID19. Both studies were performed on CELEBA-HQ-128 and LSUN-BEDROOM with hyperparameter settings shown in TAB0.The are presented in Figure 1. We observe that the non-saturating loss is stable over both data sets. Spectral normalization improves the quality of the model on both data sets. Similarly, the gradient penalty can help improve the quality of the model, but finding a good regularization tradeoff is non-trivial and requires a high computational budget. Models using the GP penalty benefit from 5:1 ratio of discriminator to generator updates as suggested by BID9. We also performed a study on hinge loss BID19 and present it in the Appendix. Figure 2: Impact of regularization and normalization: FID distribution for top 5% models. Both gradient penalty (GP) and spectral normalization (SN) outperform the baseline and should be considered, while former being more computationally expensive. Unfortunately none fully address the stability issues. DISPLAYFORM0 The goal of this study is to compare the relative performance of various regularization and normalization methods presented in the literature. 
To this end, and based on the loss study, we fix the loss to non-saturating loss BID8. We use the ResNet19 with generator and discriminator architectures described in TAB4. Finally, we consider batch normalization (BN) BID12, layer normalization (LN) BID2, spectral normalization (SN), gradient penalty (GP) BID9, dragan penalty (DR) BID16, or L 2 regularization. We consider both CELEBA-HQ-128 and LSUN-BEDROOM with the hyperparameter settings shown in TAB0.The are presented in Figure 2. We observe that adding batch norm to the discriminator hurts the performance. Secondly, gradient penalty can help, but it doesn't stabilize the training. In fact, it is non-trivial to strike a balance of the loss and regularization strength. Spectral normalization helps improve the model quality and is more computationally efficient than gradient penalty. This is consistent with recent in. Similarly to the loss study, models using GP penalty benefit from 5:1 ratio of discriminator to generator updates. Furthermore, in a separate ablation study we observed that running the optimization procedure for an additional 100K steps is likely to increase the performance of the models with GP penalty. Impact of Simultaneous Regularization and Normalization. Given the folklore that the Lipschitz constant of the discriminator is critical for the performance, one may expect simultaneous Test set Noisy test set Figure 3: Impact of simultaneous normalization and regularization: FID distribution for top 5% models. Gradient penalty coupled with spectral normalization (SN) or layer normalization (LN) strongly improves the performance over the baseline. DISPLAYFORM0 regularization and normalization could improve model quality. To quantify this effect, we fix the loss to non-saturating loss BID8, use the Resnet19 architecture (as above), and combine several normalization and regularization schemes, with hyperparameter settings shown in TAB0 coupled with 24 randomly selected parameters. The are presented in Figure 3. We observe that one may benefit from additional regularization and normalization. However, a lot of computational effort has to be invested for somewhat marginal gains in FID. Nevertheless, given enough computational budget we advocate simultaneous regularization and normalization -spectral normalization and layer normalization seem to perform well in practice. An interesting practical question is whether our findings also hold for a different model capacity. To this end, we also perform a study on SNDCGAN from BID19. We consider the non-saturating GAN loss, gradient penalty and spectral normalization. While for smaller architectures regularization is not essential BID17, the regularization and normalization effects might become more relevant due to deeper architectures and optimization considerations. The are presented in FIG4. We observe that both architectures achieve comparable and benefit from regularization and normalization. Spectral normalization strongly outperforms the baseline for both architectures. In this section we focus on several pitfalls we encountered while trying to reproduce existing and provide a fairly and accurate comparison. Metrics. There already seems to be a divergence in how the FID score is computed: Some authors report the score on training data, yielding a FID between 50k training and 50k generated samples . Some opt to report the FID based on 10k test samples and 5k generated samples and use a custom implementation BID19. 
Finally, BID17 report the score with respect to the test data, in particular FID between 10k test samples, and 10k generated samples. The subtle differences will in a mismatch between the reported FIDs, in some cases of more than 10%. We argue that FID should be computed with respect to the test data set as and use 10k test samples and 10k generated samples on CIFAR10 and LSUN-BEDROOM, and 3k vs 3k on CELEBA-HQ-128 as in in BID17. Similarly, there are several ways to compute a diversity score using MS-SSIM and we follow the approach from BID7. We provide the implementation details in Section G of the Appendix. Details of neural architectures. Even in popular architectures, like ResNet, there is still a number of design decision one needs to make, that are often omitted from the reported . Those include the exact design of the ResNet cell (order of layers, when is ReLu applied, when to upsample and downsample, how many filters to use). Some of these differences might lead to potentially unfair comparison. As a , we suggest to use the architectures presented within this work as a solid baseline. An ablation study on various ResNet modifications is available in the Appendix. Data sets. A common issue is related to data set processing -does LSUN-BEDROOM always correspond to the same data set? In most cases the precise algorithm for upscaling or cropping is not clear which introduces inconsistencies between on the "same" data set. Implementation details and non-determinism. One major issue is the mismatch between the algorithm presented in a paper and the code provided online. We are aware that there is an embarrassingly large gap between a good implementation and a bad implementation of a given model. Hence, when no code is available, one is forced to guess which modifications were done. Another particularly tricky issue is removing randomness from the training process. After one fixes the data ordering and the initial weights, obtaining the same score by training the same model twice is non-trivial due to randomness present in certain GPU operations BID5. Disabling the optimizations causing the non-determinism often in an order of magnitude running time penalty. While each of these issues taken in isolation seems minor, they compound to create a mist which introduces friction in practical applications and the research process . A recent large-scale study on GANs and Variational Autoencoders was presented in BID17. The authors consider several loss functions and regularizers, and study the effect of the loss function on the FID score, with low-to-medium complexity data sets (MNIST, CIFAR10, CELEBA), and a single (InfoGAN style) architecture. In this limited setting, the authors found that there is no statistically significant difference between recently introduced models and the original non-saturating GAN. A study of the effects of gradient-norm regularization in GANs was recently presented in BID7. The authors posit that the gradient penalty can also be applied to the non-saturating GAN, and that, to a limited extent, it reduces the sensitivity to hyperparameter selection. In a recent work on spectral normalization, the authors perform a small study of the competing regularization and normalization approaches BID19. We are happy to report that we could reproduce these and we present them in the Appendix. 
Inspired by these works and building on the available open-source code from BID17, we take one additional step in all dimensions considered therein: more complex neural architectures, more complex data sets, and more involved regularization and normalization schemes. In this work we study the GAN landscape: losses, regularization and normalization schemes, and neural architectures, and their impact on the on the quality of generated samples which we assess by recently introduced quantitative metrics. Our fair and thorough empirical evaluation suggests that one should consider non-saturating GAN loss and spectral normalization as default choices when applying GANs to a new data set. Given additional computational budget, we suggest adding the gradient penalty from BID9 and train the model until convergence. Furthermore, additional marginal gains can be obtained by combining normalization and regularization empirically confirming the importance of the Lipschitz constant of the discriminator. Furthermore, both types of architectures proposed up-to this point perform reasonably well. A separate ablation study uncovered that most of the tricks applied in the ResNet style architectures lead to marginal changes in the quality and should be avoided due to the high computational cost. As a of this large-scale study we identify the common pitfalls standing in the way of accurate and fair comparison and propose concrete actions to demystify the future -issues with metrics, data set preprocessing, non-determinism, and missing implementation details are particularly striking. We hope that this work, together with the open-sourced reference implementations and trained models, will serve as a solid baseline for future GAN research. We present an empirical study with SNDCGAN and ResNet CIFAR architectures on CIFAR10 in figure 5 and figure 6. In addition to Section 3.1, we evaluate one more kind of loss on CIFAR10. Here HG, NS and WGAN stand for hinge loss, non saturating loss and Wasserstein loss respectively. We observe that hinge loss performs very similar to non-saturating loss. DISPLAYFORM0 The KID metric introduced by BID3 is an alternative to FID. We use models from our Regularization and Normalization study (see Section 3.2) to compare both metrics. Here, by model we denote everything that needs to be specified for the training -including all hyper-parameters, like learning rate, λ, Adam's β, etc. The Spearman rank-order correlation coefficient between KID and FID scores is approximately 0.994 for LSUN-BEDROOM and 0.995 for CELEBA-HQ-128 datasets. To evaluate a practical setting of selecting several best models, we compare the intersection between the set of "best K models by FID" and the set of "best K models by KID" for K ∈ 5, 10, 20, 50, 100. The are summarized in TAB2.This experiment suggests that FID and KID metrics are very strongly correlated, and for the practical applications one can choose either of them. Also, the from our studies based on FID should transfer to studies based on KID. We used the same architecture as BID19, with the parameters copied from the GitHub page 5. In TAB3, we describe the operations in layer column with order. Kernel size is described in format [f ilter h, f ilter w, stride], input shape is h × w and output shape is h × w × channels. The slopes of all lReLU functions are set to 0.1. The input shape h × w is 128 × 128 for CELEBA-HQ-128 and LSUN-BEDROOM, 32 × 32 for CIFAR10. The ResNet19 architecture is described in TAB4. 
RS column stands for the resample of the residual block, with downscale(D)/upscale(U)/none(-) setting. MP stands for mean pooling and BN for batch normalization. ResBlock is defined in TAB5. The addition layer merges two paths by adding them. The first path is a shortcut layer with exactly one convolution operation, while the second path consists of two convolution operations. The downscale layer and upscale layer are marked in TAB5. We used average pool with kernel for downscale, after the convolution operation. We used unpool from https://github.com/tensorflow/ tensorflow/issues/2169 for upscale, before convolution operation. h and w are the input shape to the ResNet block, output shape depends on the RS parameter. ci and co are the input channels and output channels for a ResNet block. TAB6 described the ResNet CIFAR architecture we used in Figure 5 for reproducing the existing . Note that RS is set to none for third ResBlock and fourth ResBlock in discriminator. In this case, we used the same ResNet block defined in TAB5 without resampling. DISPLAYFORM0 ReLU, MP --512 (a) ResBlock discriminator DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 ReLU, MP --128 DISPLAYFORM4 We have noticed six minor differences on Resnet architecture comparing to implementation from https: //github.com/pfnet-research/chainer-gan-lib/blob/master/common/net.py BID19. We did ablation study to verify the impact of these differences. FIG7 shows the impact of the ablation study, with details described as following.• DEFAULT: ResNet CIFAR architecture with spectral normalization and non-saturating GAN loss.• SKIP: Use input as output for the shortcut connection in the discriminator ResBlock. By default it was a conv layer with 3x3 kernel.• CIN: Use ci for the discriminator ResBlock hidden layer output channels. By default it was co in our setup, while BID19 used co for first ResBlock and ci for the rest.• OPT: Use an optimized setup for the first discriminator ResBlock, which includes: no ReLU, a conv layer for the shortcut connections, use co instead of ci in ResBlock.• CIN OPT: Use CIN and OPT together. It means the first ResBlock is optimized while the remaining ResBlocks use ci for the hidden output channels.• SUM: Use reduce sum for the discriminator output. By default it was reduce mean.• TAN: Use tanh for the generator output, as well as range [-1, 1] for discriminator input. By default it was sigmoid and discriminator input range.• EPS: Use a bigger epsilon 2e − 5 for generator batch normalization. By default it was 1e − 5 in TensorFlow.• ALL: Apply all the above differences together. In the ablation study, the CIN experiment obtained the worst FID score. Combining with OPT, the CIN were improved to the same level as the others which is reasonable because the first block has three input channels, which becomes a bottleneck for the optimization. Hence, using OPT and CIN together performs as well as the others. Overall, the impact of these differences are minor according to the study on CIFAR10. DISPLAYFORM5 To make the future GAN training simpler, we propose a set of best parameters for three setups: Best parameters without any regularizer. Best parameters with only one regularizer. Best parameters with at most two regularizers. TAB7 summarize the top 2 parameters for SNDCGAN architecture, ResNet19 architecture and ResNet CIFAR architecture, respectively. Models are ranked according to the median FID score of five different random seeds with fixed hyper-parameters in TAB0. 
Note that ranking models according to the best FID score of different seeds will achieve better but unstable . Gaussian Process optimization hyper-parameters are not included in this table. For ResNet19 architecture with at most two regularizers, we have run it only once due to computational overhead. To show the model stability, we listed the best FID score out of five seeds from the same parameters in column best. Spectral normalization is clearly outperforms the other normalizers on SNDCGAN and ResNet CIFAR architectures, while on ResNet19 both layer normalization and spectral normalization work well. To visualize the FID score on each data set, Figure 8, Figure 9 and Figure 10 show the generated examples by GANs. We select the examples from the best FID run, and then increase the FID score for two more plots. For each architecture and hyper-parameter we estimate its impact on the final FID. Figure 11 presents heatmaps for hyperparameters, namely the learning rate, β1, β2, n disc, and λ for each combination of neural architecture and data set. We used the MS-SSIM scorer from TensorFlow with default power factors . Note that the default filter size for each scale layer is 11, the minimum image edge is 11 × 2 4 = 176. To adapt it to CELEBA-HQ-128 data set with size 128 × 128, we used the minimum of filter size 11 and image size in last scale layer to allow the computation followed the previous work BID7. | A sober view on the current state of GANs from a practical perspective | 1,258 | scitldr |
As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our approach generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweights irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare our approach with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches.

Interpretability of deep learning models has received significant attention in computer vision and for deep sequential models. However, interpretability for RL-based agents has received significantly less attention. Interpreting the strategies learned by RL agents can help users better understand the problem that the agent is trained to solve. For instance, interpreting the actions of a chess-playing agent in a position could provide useful information about aspects of the position. Interpretation of RL agents is also an important step before deploying such models to solve real-world problems.

Inspired by the popularity and use of saliency maps to interpret models in computer vision, a number of existing approaches have proposed similar methods for reinforcement learning-based agents. Greydanus et al. (2018) derive saliency maps that explain RL agent behavior by applying a Gaussian blur to different parts of the input image. They generate saliency maps using differences in the value function and policy vector between the original and perturbed state, and achieve promising results on agents trained to play Atari games. Iyer et al. (2018) compute saliency maps using a difference in the action-value (Q(s, a)) between the original and perturbed state.

There are two primary limitations to these approaches. The first is that they highlight features whose perturbation affects actions apart from the one we are explaining. This is illustrated in Figure 1, which shows a chess position (it is white's turn). Stockfish (https://stockfishchess.org/) plays the move Bb6 in this position, which traps the black rook (a5) and queen (c7). (We follow the coordinate naming convention where columns are 'a'-'h' (left-right), rows are '8'-'1' (top-bottom), and pieces are labeled using the first letter of their name in upper case, e.g. 'B' denotes the bishop; a move consists of the piece and the square it moves to, e.g. 'Bb6' indicates that the bishop moves to b6.) The knight protects the white bishop on a4, and hence the move works. In this position, if we consider the saliency of the white queen (square d1), then it is apparent that the queen is not involved in the tactic and hence the saliency should be low. However, perturbing the state (by removing the queen) leads to a state with substantially different values for Q(s, a) and V(s). Therefore, existing approaches (Greydanus et al., 2018; Iyer et al., 2018) mark the queen as salient.

The second limitation is that they highlight features that are not relevant to the action to be explained. In Figure 1c, perturbing the state by removing the black pawn on c6 alters the expected reward for actions other than the one to be explained. Therefore, it alters the policy vector and is marked salient. However, the pawn is not relevant to explain the move played in the position (Bb6).
In this work, we propose a perturbation based approach for generating saliency maps for black-box agents that builds on two desired properties of action-focused saliency. The first, specificity, captures the impact of perturbation only on the Q-value of the action to be explained. In the above example, this term downweights features such as the white queen that impact the expected reward of all actions equally. The second, relevance, downweights irrelevant features that alter the expected rewards of actions other than the action to be explained. It removes features such as the black pawn on c6 that increase the expected reward of other actions (in this case, Bb4). By combining these aspects, we generate a saliency map that highlights features of the input state that are relevant for the action to be explained. Figure 1 illustrates how the saliency map generated by our approach only highlights pieces relevant to the move, unlike existing approaches.

We use our approach to explain the actions taken by agents for board games (Chess and Go), and for Atari games (Breakout, Pong and Space Invaders). Using a number of illustrative examples, we show that our proposed approach obtains more focused and accurate interpretations for all of these setups when compared to Greydanus et al. (2018) and Iyer et al. (2018). We also demonstrate that our approach is more effective in identifying important pieces in chess puzzles, and further, in aiding skilled chess players to solve chess puzzles (it improves the accuracy of solving them by nearly 25% and reduces the time taken by 31% over existing approaches).

We are given an agent M, operating on a state space S, with the set of actions A_s for s ∈ S, and a Q-value function denoted as Q(s, a) for s ∈ S, a ∈ A_s. Following a greedy policy, let the action that was selected by the agent at state s be â, i.e. â = arg max_a Q(s, a). The states are parameterized in terms of state-features F. For instance, in a board game such as chess, the features are the 64 squares; for Atari games, the features are pixels. We are interested in identifying which features of the state s are important for the agent in taking action â. We assume that the agent is in the exploitation phase and therefore plays the action with the highest expected reward. This feature importance is described by an importance-score or saliency for each feature f, denoted by S, where S[f] ∈ [0, 1] denotes the saliency of the f-th feature of s for the agent taking action â. A higher value indicates that the f-th feature of s is more important for the agent when taking action â.

The general outline of perturbation based saliency approaches is as follows. For each feature f, first perturb s to get s′. For instance, in chess, we can perturb the board position by removing the piece in the f-th square; in Atari, we perturb the input image by adding a Gaussian blur centered on the f-th pixel. Second, query M to get Q(s′, a) ∀a ∈ A_s ∩ A_s′. We take the intersection of A_s and A_s′ to represent the case where some actions may be legal in s but not in s′ and vice versa. For instance, when we remove a piece in chess, actions that were legal earlier may not be legal anymore. In the rest of this section, when we use "all actions" we mean all actions that are legal in both the states s and s′. Finally, compute S[f] based on how different Q(s, a) and Q(s′, a) are; intuitively, S[f] should be higher if Q(s′, â) is significantly different from Q(s, â). Greydanus et al. (2018) compute the saliency map using S[f] = 1/2 ‖π(s) − π(s′)‖², while Iyer et al. (2018) use S[f] = Q(s, â) − Q(s′, â). In this work, we will propose an alternative approach to compute S[f].
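To make this outline concrete, the sketch below implements the generic perturb-and-query loop for a black-box agent together with the two baseline scores described above. It is a minimal sketch: the `agent.q_values` and `perturb` callables are hypothetical interfaces (e.g., piece removal for chess or a Gaussian blur for Atari), and the Greydanus et al. (2018)-style score here uses Q-values in place of the policy vector as a simplification.

```python
import numpy as np

def perturbation_saliency(state, agent, perturb, features, score="q_diff"):
    """Generic perturbation-based saliency for a black-box agent.

    agent.q_values(state) -> {action: Q(state, action)} for legal actions
    perturb(state, f)     -> perturbed state s' (e.g., remove the piece on
                             square f in chess, or blur a patch around pixel
                             f in an Atari frame)
    """
    q_s = agent.q_values(state)
    a_hat = max(q_s, key=q_s.get)              # greedy action to be explained
    saliency = {}
    for f in features:
        s_prime = perturb(state, f)
        q_sp = agent.q_values(s_prime)
        common = sorted(set(q_s) & set(q_sp))  # actions legal in both s and s'
        if a_hat not in common:                # explained action no longer legal
            saliency[f] = 0.0
            continue
        if score == "q_diff":                  # Iyer et al. (2018)-style score
            saliency[f] = q_s[a_hat] - q_sp[a_hat]
        else:                                  # Greydanus et al. (2018)-style score,
            qs = np.array([q_s[a] for a in common])   # with Q-values standing in
            qp = np.array([q_sp[a] for a in common])  # for the policy vector
            saliency[f] = 0.5 * float(np.sum((qs - qp) ** 2))
    return a_hat, saliency
```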
Properties. We define two desired properties of an accurate saliency map for policy-based agents:

1. Specificity: Saliency S[f] should focus on the effect of the perturbation specifically on the action being explained, â, i.e. it should be high if perturbing the f-th feature of the state reduces the relative expected reward of the selected action. Stated another way, S[f] should be high if Q(s, â) − Q(s′, â) is substantially higher than Q(s, a) − Q(s′, a) for a ≠ â. For instance, in Figure 1, removing pieces such as the white queen impacts all actions uniformly (Q(s, a) − Q(s′, a) is roughly equal for all actions). Therefore, such pieces should not be salient for explaining â. On the other hand, removing pieces such as the white knight on a4 specifically impacts the move (â = Bb6) we are trying to explain (Q(s, Bb6) − Q(s′, Bb6) ≫ Q(s, a) − Q(s′, a) for other actions a). Therefore, such pieces should be salient for â.

2. Relevance: Saliency S[f] should be low for features whose perturbation only alters the expected rewards of actions other than â. For instance, in Figure 1c, removing the black pawn on c6 increases the expected reward of other actions (in this case, Bb4). However, it does not affect the expected reward of the action to be explained (Bb6). Therefore, the pawn is not salient for explaining the move. In general, such features that are irrelevant to â should not be salient.

Existing approaches to saliency maps do not capture these properties in how they compute the saliency. Neither of the saliency approaches used in Greydanus et al. (2018) focuses on action-specific effects, since both aggregate the change over all actions. Although the saliency computation in Iyer et al. (2018) is somewhat more specific to the action, i.e. S[f] = Q(s, â) − Q(s′, â), it ignores whether the effects on Q are relevant only to â, or affect all the other actions as well. This is illustrated in Figure 1.

To focus on the effect of the change on the action, we are interested in whether the relative returns of â change with the perturbation. Using Q(s′, â) directly, as in Iyer et al. (2018), does not capture the relative changes to Q(s′, a) for other actions. To support specificity, we use the softmax over Q-values to normalize the values (as is also used in softmax action selection),

P(s, a) = exp(Q(s, a)) / Σ_{a′} exp(Q(s, a′)),    (1)

and compute ∆p = P(s, â) − P(s′, â), the difference in the relative expected reward of the action to be explained between the original and the perturbed state. A high value of ∆p thus implies that the feature is important for the specific choice of action â by the agent, while a low value indicates that the effect is not specific to the action.

Identifying Relevant Changes. Apart from focusing on the change in Q(s, â), we also want to ensure that the perturbation leads to minimal effect on the relative expected returns for other actions. To capture this intuition, we compute the relative returns of all other actions, and compute saliency in proportion to their similarity. Specifically, we normalize the Q-values using a softmax over all actions apart from the selected action â,

P_rem(s, a) = exp(Q(s, a)) / Σ_{a′ ≠ â} exp(Q(s, a′)),  for a ≠ â.    (2)

We use the KL-divergence D_KL = D_KL(P_rem(s′, ·) ‖ P_rem(s, ·)) to measure the difference between P_rem(s′, a) and P_rem(s, a). A high D_KL indicates that the relative expected reward of taking some actions (other than the original action) changes significantly between s and s′. In other words, a high D_KL indicates that the effect of the feature is spread over other actions, i.e. the feature may not be relevant for the selected action â.

Computing the Saliency. To compute saliency S[f], we need to combine ∆p and D_KL. When D_KL is high, S[f] should be low, regardless of whether ∆p is high; the perturbation is affecting many other actions. Conversely, when D_KL is low, S[f] should depend on ∆p. To be able to compare these properties on a similar scale, we define a normalized measure of distribution similarity K using D_KL:

K = 1 / (1 + D_KL).    (3)

As D_KL goes from 0 to ∞, K goes from 1 to 0. Thus, S[f] should be low if either ∆p is low or K is low. The harmonic mean provides this desired effect in a robust, smooth manner, and therefore we define S[f] to be the harmonic mean of ∆p and K:

S[f] = 2 · K · ∆p / (K + ∆p).    (4)

Equation 4 captures our desired properties of saliency maps. If perturbing the f-th feature affects the expected rewards of all actions uniformly, then ∆p is low and subsequently S[f] is low. This low value of ∆p captures the property of specificity defined above. If perturbing the f-th feature of the state affects the rewards of some actions other than the action to be explained, then D_KL is high, K is low, and S[f] is low. This low value of K captures the property of relevance defined above.
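As a reference for Equations 1-4, the sketch below computes the proposed saliency for a single perturbed feature from the Q-values of the original and perturbed states. It assumes â remains legal in the perturbed state, and it clips negative ∆p to zero, which is one reasonable choice (a perturbation that helps â is not evidence of saliency); it is an illustrative sketch rather than the exact released implementation.

```python
import numpy as np

def softmax(q):
    q = np.asarray(q, dtype=np.float64)
    e = np.exp(q - q.max())                   # subtract max for numerical stability
    return e / e.sum()

def action_focused_saliency(q_s, q_sp, a_hat, eps=1e-12):
    """Saliency of a single perturbed feature, following Equations 1-4.

    q_s, q_sp : dicts {action: Q-value} for the original state s and the
                perturbed state s'; a_hat is the action being explained.
    """
    actions = sorted(set(q_s) & set(q_sp))    # actions legal in both states
    p_s = dict(zip(actions, softmax([q_s[a] for a in actions])))     # Eq. 1
    p_sp = dict(zip(actions, softmax([q_sp[a] for a in actions])))
    delta_p = p_s[a_hat] - p_sp[a_hat]        # specificity: drop in relative reward of a_hat

    rem = [a for a in actions if a != a_hat]  # softmax over the remaining actions (Eq. 2)
    if rem:
        p_rem_s = softmax([q_s[a] for a in rem])
        p_rem_sp = softmax([q_sp[a] for a in rem])
        d_kl = float(np.sum(p_rem_sp * np.log((p_rem_sp + eps) / (p_rem_s + eps))))
    else:
        d_kl = 0.0

    k = 1.0 / (1.0 + d_kl)                    # Eq. 3: distribution similarity K
    if delta_p <= 0.0:                        # perturbation did not hurt a_hat specifically
        return 0.0
    return 2.0 * k * delta_p / (k + delta_p)  # Eq. 4: harmonic mean of delta_p and K
```

Combined with the perturbation loop sketched earlier, this yields one saliency value per state feature, which can then be rendered as a heatmap over squares (board games) or pixels (Atari).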
To show that our approach produces more meaningful saliency maps than existing approaches, we use sample positions from Chess, Atari (Breakout, Pong and Space Invaders) and Go (Section 3.1). To show that our approach generates saliency maps that provide useful information to humans, we conduct human studies on problem-solving for chess puzzles (Section 3.2). To automatically compare the saliency maps generated by different perturbation-based approaches, we introduce a Chess saliency dataset (Section 3.3). We use the dataset to show how our approach is better than existing approaches in identifying chess pieces that humans deem relevant in several positions. In Section 3.4, we show how our approach can be used to understand common tactical ideas in chess by interpreting the action of a trained agent.

To show that our approach works for black-box agents, regardless of whether they are trained using reinforcement learning, we use a variety of agents. We only assume access to the agent's Q(s, a) function for all experiments. For experiments on chess, we use the Stockfish agent. For experiments on Go, we use the pre-trained MiniGo RL agent. For experiments on Atari agents and for generating saliency maps for Greydanus et al. (2018), we use their code and pre-trained RL agents. For generating saliency maps using Iyer et al. (2018), we use our own implementation. All of our code and more detailed results are available in our Github repository.

In this section, we provide examples of generated saliency maps to highlight the qualitative differences between our approach, which is action-focused, and existing approaches that are not.

Chess. Figure 1 shows sample positions where our approach produces more meaningful saliency maps than existing approaches for a chess-playing agent (Stockfish). Greydanus et al. (2018) and Iyer et al. (2018) generate saliency maps that highlight pieces that are not relevant to the move played by the agent. This is because they use differences in Q(s, a), V(s), or the L2 norm of the policy vector between the original and perturbed state to calculate the saliency maps. Therefore, pieces such as the white queen that affect the value estimate of the state are marked salient. In contrast, the saliency map generated by our approach only highlights pieces relevant to the move.
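The chess comparisons above, like all of our experiments, only require query access to the agent's Q(s, a) function. One way such an interface could be wrapped around a UCI engine such as Stockfish is sketched below using the python-chess package; the search depth and the use of the engine's evaluation after each legal move as a stand-in for Q(s, a) are illustrative assumptions, not a description of the exact experimental setup.

```python
import chess
import chess.engine

class UciQAgent:
    """Wraps a UCI engine (e.g., Stockfish) behind a Q(s, a)-style interface."""

    def __init__(self, engine_path="stockfish", depth=12):
        self.engine = chess.engine.SimpleEngine.popen_uci(engine_path)
        self.limit = chess.engine.Limit(depth=depth)

    def q_values(self, board):
        """Return {move: score}: the engine's evaluation (in pawns, from the
        perspective of the side to move in `board`) after each legal move."""
        q = {}
        for move in list(board.legal_moves):
            board.push(move)
            info = self.engine.analyse(board, self.limit)
            # After pushing the move it is the opponent's turn, so take the
            # score from the point of view of the original side to move.
            score = info["score"].pov(not board.turn).score(mate_score=10000)
            board.pop()
            q[move.uci()] = score / 100.0
        return q

    def close(self):
        self.engine.quit()
```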
Atari To show that our approach generates saliency maps that are more focused than those generated by, we compare the approaches on three Atari games: Breakout, Pong, and Space Invaders. Figures 2, 3, and 4 shows the . Our approach highlights regions of the input image more precisely, while the approach highlights several regions of the input image that are not relevant to explain the action taken by the agent. Go Figure 5 shows a board position in Go. It is black's turn. The four white stones threaten the three black stones that are in one row at the top left corner of the board. To save those three black stones, black looks at the three white stones that are directly below the three black ones. Due to another white stone below the three white stones, the continuous row of three white stones cannot be captured easily. Therefore black moves to place a black stone below that single white stone in an attempt to start capturing the four white stones. It takes the next few turns to surround the structure of four white stones with black ones, thereby saving its pieces. The method described in generates a saliency map that highlights almost all the pieces on the board. Therefore, it reveals little about the pieces that the agent thinks are important. On the other hand, the map produced by highlights only a few pieces. The saliency map generated by our approach correctly highlights the structure of four white stones and the black stones already present around them that may be involved in capturing them. Figure 6: ROC curves comparing approaches on the chess saliency dataset To show that our approach generates saliency maps that provide useful information to humans, we conduct human studies on problem-solving for chess puzzles. We show fifteen chess players (ELO 1600-2000) ten chess puzzles from https://www.chess.com (average difficulty ELO 1800). For each puzzle, we show either the puzzle without a saliency map, or the puzzle with a saliency map generated by our approach,, or. The player is then asked to solve the puzzle. We measure the accuracy (number of puzzles correctly solved) and the average time taken to solve the puzzle, shown in Table 1. The saliency maps generated by our approach are more helpful for humans when solving puzzles than those generated by other approaches. We observed that the saliency maps generated by often confuse humans, because they highlight several pieces unrelated to the tactic. The maps generated by highlight few pieces and therefore are marginally better than showing no saliency maps for solving puzzles. To automatically compare the saliency maps generated by different perturbation-based approaches, we introduce a Chess saliency dataset. The dataset consists of 100 chess puzzles 7. Each puzzle has a single correct move. For each puzzle, we ask three human experts (ELO > 2200) to mark the pieces that are important for playing the correct move. We take a majority vote of the three experts to obtain a list of pieces that are important for the move played in the position. The complete dataset is available in our Github repository 7. We use this dataset to compare our approach to existing approaches ). Each approach generates a list of squares and a score that indicates how salient the piece on the square is for a particular move. We scale the scores between 0 and 1 to generate ROC curves. Figure 6a shows the . Our approach generates saliency maps that are better than existing approaches at identifying chess pieces that humans deem relevant in certain positions. 
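One possible way to run the automatic comparison just described is sketched below. The data structures and loading code are hypothetical, but the protocol follows the description above: the majority vote of the three experts defines the ground-truth labels, the per-square scores of each approach are scaled to [0, 1], and a pooled ROC curve and its AUC are computed per approach.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def chess_saliency_roc(puzzles):
    """puzzles: list of (scores, votes) pairs, one per puzzle, where scores maps
    square -> raw saliency score from one approach and votes maps square -> number
    of experts (0-3) who marked the square as important for the correct move."""
    y_true, y_score = [], []
    for scores, votes in puzzles:
        vals = np.array([scores.get(sq, 0.0) for sq in votes], dtype=float)
        vals = (vals - vals.min()) / (vals.max() - vals.min() + 1e-12)  # scale to [0, 1]
        y_score.extend(vals.tolist())
        y_true.extend(int(votes[sq] >= 2) for sq in votes)              # majority vote
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return fpr, tpr, roc_auc_score(y_true, y_score)
```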
To evaluate the relative importance of the two components in our saliency computation (S[f]; Equation 4), we compute saliency maps and ROC curves using each component individually, i.e. S[f] = ∆p or S[f] = K, and compare the harmonic mean to other ways of combining them, i.e. using the average, geometric mean, and minimum of ∆p and K. Figure 6b shows the results. Combination of the two properties via the harmonic mean leads to more accurate saliency maps than the alternative approaches. In this section, we show how our approach can be used to understand common tactical ideas in chess by interpreting the actions of a trained agent. Figure 7 (saliency maps generated by our approach that demonstrate common tactical motifs in chess) illustrates common tactical positions. The corresponding saliency maps are generated by interpreting the moves played by the Stockfish agent in these positions. In Figure 7a, it is white to move. The surprising Rook x d6 is the move played by Stockfish. Figure 7d shows the saliency map generated by our approach. The map illustrates the key idea in the position: once black's rook recaptures white's rook, white's bishop pins it to the black king. Therefore, white can increase the number of attackers on the rook. The additional attacker is the pawn on e4, highlighted by the saliency map. In Figure 7b, it is white to move. Stockfish plays Queen x h7. A queen sacrifice! Figure 7e shows the saliency map. The map highlights the white rook and bishop, along with the queen. The key idea is that once black captures the queen with his king (a forced move), the white rook moves to h5 with checkmate. This checkmate is possible because the white bishop guards the important escape square on g6. The saliency map highlights both pieces. In Figure 7c, it is black to move. Stockfish plays the sacrifice Rook x d4. The saliency map in Figure 7f illustrates several key aspects of the position. The black queen and light-squared bishop are threatening mate on g2. The white queen protects g2. The white rook on a5 is unguarded. Therefore, once white recaptures the sacrificed rook with the pawn on c3, black can attack both the white rook and queen with the move bishop to b4. The idea is that the white queen is "overworked" or "overloaded" on d2, having to guard both the g2-pawn and the a5-rook against black's double attack. We are also interested in evaluating the robustness of the generated saliency maps: is the saliency different if non-salient changes are made to the state? To evaluate the robustness of our approach, we perform two irrelevant perturbations on the positions in the chess saliency dataset. First, we pick a random piece among the ones labeled non-salient by human experts in a particular position and remove it from the board. We repeat this for each puzzle in the dataset to generate a new perturbed saliency dataset. Second, we remove a random piece among the ones labeled non-salient by our approach for each puzzle, creating another perturbed saliency dataset. To evaluate the effect of non-salient perturbations on our generated saliency maps, we compute the AUC values for the generated saliency maps, as above, on these perturbed datasets. Since we remove only non-salient pieces, we expect the saliency maps, and consequently the AUC values, to be similar to those on the original dataset. For both of these perturbations, we get an AUC value of 0.92, the same as the value on the non-perturbed dataset, confirming the robustness of our saliency maps to these non-relevant perturbations.
Since understanding RL agents is important both for deploying RL agents to the real world and for gaining insights about the tasks, a number of different kinds of interpretations have been introduced. A number of approaches generate natural language explanations to explain RL agents (; ;). They assume access to an exact MDP model and that the policies map from interpretable, high-level state features to actions. More recently, analyze execution traces of an agent to extract explanations. A shortcoming of this approach is that it explains policies in terms of hand-crafted state representations that are semantically meaningful to humans. This is often not practical for board games or Atari games where the agents learn from raw board/visual input. apply t-SNE on the last layer of a deep Q-network (DQN) to cluster states of behavior of the agent. They use Semi-Aggregated Markov Decision Processes (SAMDPs) to approximate the black box RL policies. They use the more interpretable SAMDPs to gain insight into the agent's policy. An issue with the explanations is that they emphasize t-SNE clusters that are difficult to understand for non-experts. To build user trust and increase adoption, it is important that the insight into agent behavior should be in a form that is interpretable to the untrained eye and obtained from the original policy instead of a distilled one. Most relevant to our approach are the visual interpretable explanations of deep networks using saliency maps. Methods for computing saliency can be classified broadly into two categories. Gradient-based methods identify input features that are most salient to the trained DNN by using the gradient to estimate their influence on the output. use gradient magnitude heatmaps, which was expanded upon by more sophisticated methods to address their shortcoming, such as guided backpropagation , excitation backpropagation , DeepLIFT , GradCAM (, and GradCAM++ . Integrate gradients provide two axioms to further define the shortcomings of these approaches: sensitivity (relative to a baseline) and implementation invariance, and use them to derive an approach. Nonetheless, all gradient-based approaches still depend on the shape in the immediate neighborhood of a few points, and conceptually, use perturbations that lack physical meaning, making them difficult to use and vulnerable to adversarial attacks in form of imperceivable noise . Further, they are not applicable to scenarios with black-box access to the agent, and even with white-box access to model internals, they are not applicable when agents are not fully differentiable, such as Stockfish for chess. We are more interested in perturbation-based methods for black-box agents: methods that compute the importance of an input feature by removing, altering, or masking the feature in a domain-aware manner and observing the change in output. It is important to choose a perturbation that removes information without introducing any new information. As a simple example, consider a classifier that predicts'True' if a certain input image contains a bird and'False' otherwise. Removing information from the part of the image which contains the bird should change the classifier's prediction, whereas removing information from other areas should not. Several kinds of perturbations have been explored, e.g.; remove information by replacing a part of the input with a gray square. Note that these approaches are implementation invariant by definition, and are sensitive with respect to the perturbation function. 
Existing perturbation-based approaches for ), however, by focusing on the complete Q (or V), tend to produce saliency maps that are not specific to the action of interest. Our proposed approach addresses this by measuring the impact only on the action being selected, ing in more focused and useful saliency maps, as we show in our experiments. Saliency maps focus on visualizing the dependence between the input and output to the model, essentially identifying the situation-specific explanation for the decision. Although such local explanations have applications in understanding, debugging, and developing trust with machine learning systems, they do not provide any direct insights regarding the general behavior of the model, or guarantee that the explanation is applicable to a different scenario. Attempts to provide a more general understanding of the model include carefully selecting prototype explanations to show to the user (van der) and crafting explanations that are precise and actionable . We will explore such ideas for the RL setting in future, to provide explanations that accurately characterize the behavior of the policy function, in a precise, testable, and intuive manner. There are a number of limitations of our approach to generating saliency maps in our current implementation. First, we perturb the state by removing information (removing pieces in Chess/Go, blurring pixels in Atari). Therefore, our approach cannot highlight the importance of absence of certain attributes, i.e. saliency of certain positions being empty. In games such as Chess and Go, an empty square or file (collection of empty squares) can often be important for a particular move. Future work will explore perturbation functions that add information to the state (perhaps in the form of adding pieces in Chess/Go). Such functions, along with our approach, can be used to calculate the importance of empty squares. Second, it is possible that perturbations may explore states that lie outside the manifold, i.e. they lead to invalid states. For example, unless explicitly prohibited like we do, our approach will compute the saliency of the king pieces by removing them, which is not allowed in the game, or remove the paddle from Pong. In future, we will explore strategies that take the valid state space into account when computing the saliency. Last we are estimating the saliency of each feature independently, ignoring feature dependencies and correlations, which may lead to incorrect saliency maps. We will investigate approaches that perturb multiple features to quantify the importance of each feature , and combine them with our approach to explaining black-box policy-based agents. We presented a perturbation-based approach that generates more focused saliency maps than existing approaches by balancing two aspects (specificity and relevance) that capture different desired characteristics of saliency. We showed through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches. The of our technique show that saliency can provide meaningful insights into a black-box RL agent's behavior. For experiments on Go, we use the pre-trained MiniGo RL agent: https://github.com/ tensorflow/minigo. This agent was trained using the AlphaGo Algorithm. It also adds features and architecture changes from the. 
For experiments on Atari agents and for generating saliency maps for, we use their code and pre-trained RL agents available at https://github.com/greydanus/ visualize_atari. These agents are trained using the Asynchronous Advantage Actor-Critic Algorithm (A3C) . For generating saliency maps using , we use our implementation. All of our code and more detailed are available in our Github repository: https://github.com/ rl-interpretation/understandingRL. For chess and Go, we perturb the board position by removing one piece at a time. We do not remove a piece if the ing position is illegal. For instance, in chess, we do not remove the king. For Atari, we use the perturbation technique described in. The technique perturbs the input image by adding a Gaussian blur localized around a pixel. The blur is constructed using the Hadamard product to interpolate between the original input image and a Gaussian blur. The saliency maps for Atari agents have been computed on the frames provided by in their code repository. The puzzles for conducting the Chess human studies, creating the Chess Saliency Dataset, and providing illustrative examples have been taken from Lichess: https://database.lichess. org/. The puzzles for illustrative examples on Go have been taken from OnlineGo: https: //online-go.com/puzzles. Figure 8 shows the saliency maps generated by our approach for the top 3 moves in a chess position. The maps highlight the different pieces that are salient for each move. For instance, Figure 8a shows that for the move Qd4, the pawn on g7 is important. This is because the queen move protects the pawn. For the saliency maps in Figures 8b and 8c, the pawn on g7 is not highlighted. To show that our approach generates meaningful saliency maps in Chess for RL agents, we interpret the LeelaZero Deep RL agent https://github.com/leela-zero/leela-zero. Figure 9 shows the . As discussed in Section 1, the saliency maps generated by | We propose a model-agnostic approach to explain the behaviour of black-box deep RL agents, trained to play Atari and board games, by highlighting relevant features of an input state. | 1,259 | scitldr |
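The localized Gaussian-blur perturbation for Atari described above, which interpolates between the original frame and a blurred copy via a Hadamard product with a Gaussian mask, can be sketched as follows. The mask and blur widths are free parameters of this sketch, not values taken from the original implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_perturbation(frame, i, j, mask_sigma=5.0, blur_sigma=3.0):
    """Perturb a grayscale frame by blending in a blurred copy around pixel (i, j).

    The elementwise (Hadamard) product with a Gaussian mask interpolates between
    the original frame and its blurred version, removing information locally."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.exp(-((ys - i) ** 2 + (xs - j) ** 2) / (2.0 * mask_sigma ** 2))
    blurred = gaussian_filter(frame, sigma=blur_sigma)
    return frame * (1.0 - mask) + blurred * mask
```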
To understand the inner workings of deep neural networks and provide possible theoretical explanations, we study deep representations through the untrained, random-weight CNN-DCN architecture. Viewed as a convolutional autoencoder, CNN indicates the portion of a convolutional neural network from the input to an intermediate convolutional layer, and DCN indicates the corresponding deconvolutional portion. Compared with training the DCN for a pre-trained CNN, training the DCN for a random-weight CNN converges more quickly and yields higher-quality image reconstruction. Then, what happens for the overall random CNN-DCN? We obtain the intriguing result that the image can still be reconstructed with good quality. To gain more insight into the intermediate random representation, we investigate the impact of network width versus depth, the number of random channels, and the size of random kernels on the reconstruction quality, and provide theoretical justifications for the empirical observations. We further provide a fast style transfer application using the random-weight CNN-DCN architecture to show the potential of our observation. Deep neural networks have achieved impressive performance on various machine learning tasks. However, our understanding of how these deep learning models operate remains limited. Providing a theoretical explanation or empirical interpretation for their success is an important research area. Existing works such as Arora et al. (2015; 2014) propose mathematical models for learning architectures; however, their theoretical analysis fails to capture the state-of-the-art architectures. Other works leverage either compressive sensing or ordinary differential equations to facilitate the understanding of CNNs. Still others deliver rigorous proofs about the invertibility of convolutional generative models. Despite this promising progress, there is no solid theoretical foundation on why the overall random CNN-DCN architecture is capable of image reconstruction. In this paper, we bridge the gap between the empirical observation and the theoretical explanation of CNNs, especially the invertibility of the overall random CNN-DCN architecture. To understand the deep representations of intermediate layers, a variety of visualization techniques have been developed in order to unveil the feature representation and hence the inner mechanism of convolutional neural networks (CNNs). In this work we propose applying randomization to deconvolutional networks (DCNs) for a systematic investigation of deep representations, and provide insights on the intrinsic properties of deep convolutional networks. We first observe that, when training the DCN for reconstruction, the random CNN preserves richer information in the feature space. The DCN training converges faster for the random CNN than for the pre-trained CNN and yields higher-quality image reconstruction. This indicates that there is rich information encoded in the random features; the pre-trained CNN discards some information irrelevant for classification and encodes the relevant features in a way favorable for classification but harder for reconstruction. This leads us to be curious about what happens if we feed the images to a CNN-DCN architecture where both the CNN and the DCN have random weights. Our motivation for studying the overall random CNN-DCN architecture is threefold.
First, a series of works empirically showed that a certain feature learning architecture with random weights allowed satisfactory discriminative validity on object recognition tasks , and certain convolutional pooling architectures even with random weights can be inherently frequency selective and translation invariant, leading to the potential application of fast search of network architectures. Second, studying a complex system with random weights rather than learned determin-istic ones may lead to a better understanding of the system even in the learned case. For example, in the field of compressed sensing, random sampling leads to breakthroughs in the understanding of the number of required measurements for a stable reconstruction of the signal;. For highly complicated systems with nonlinear operations along the hidden layers, there are already some investigations on random deep neural networks;; Ulyanov et al. (2017a). Third, as a reversible encoder-decoder architecture, deconvolution is a valuable visualization technique for studying the feature representation of deep convolutional nets. To our knowledge there is no existing work on the random deconvolutional networks in the literature. Our work on using deconvolution to study the random intermediate features of CNN provides new insights and inspires possible applications with untrained deep neural models. Our main and contributions are as follows. We study the overall random CNN-DCN architecture to investigate the randomness in deconvolutional networks, i.e. there is no training at all for inverting the inputs that passes their information through a random weight convolutional network. Surprisingly, the image is inverted with satisfactory quality. The geometric and photometric features of the inputs are well preserved given a sufficient number of channels. We provide empirical evidence as well as theoretical analysis on the reconstruction quality, and bound the error in terms of the number of random nonlinearities, the network architecture, the distribution of the random weights, and local similarity of the input which is high for natual images. Extensive empirical study by varying the network width, depth, or kernel size has been performed to show the effectiveness on the inversion. The CNN-DCN architecture with random weights can be very useful on texture synthesis, style transfer, image segmentation, image inpainting, etc. As an example, we illustrate how fast style transfer can be applied using random weight CNN-DCN architecture. Note that our approach can save a big amount of time and energy as we do not need to do the pre-training on deep models, and it is very flexible as we can easily try whatever nerual network architecture as we wish. Two techniques are closely related to our work, deconvolution and randomization. Deconvolution involves a CNN-DCN architecture, where CNN indicates the portion of a convolutional neural network from the input to an intermediate convolutional layer, and DCN indicates the corresponding deconvolutional network aiming to invert the intermediate features to the original images. Randomization indicates the stochastic assignment of weights to the deep neural network. As a generative model for encoder-decoder functions, deconvolutional networks (DCNs) are commonly used for deep feature visualization. 
propose to use a multi-layered deconvolutional network to project the feature activations back to the input pixel space, and show that the features have many intuitively desirable properties such as compositionality, increasing invariance and class discrimination for deeper layers. design a deconvolution variant to invert image representations learned from a pre-trained CNN, and conclude that features in higher layers preserve colors and rough contours of the images and discard information irrelevant for the classification task that the convolutional model is trained on. As there is no back propagation, their reconstruction is much quicker than the representation inverting method on gradient descent. Randomization on neural networks can be tracked back to the 1960's where the bottom-most layer of shallow networks consisted of random binary connections. In recent years, largely motivated by the fact that "randomization is computationally cheaper than optimization", randomization has been resurfacing repeatedly in the machine learning literature. For optimization problems such as regression or classification, this technique is used to stochastically assign a subset of weights in a feedforward network to derive a simpler optimization problem;. Specifically, they compute a weighted sum of the inputs after passing them through a bank of arbitrary randomized nonlinearities, such that the ing optimization task is formulated as a linear least-squares problem. Empirical comparisons as well as theoretical guarantees are provided for the approximation Rahimi & Recht (2008; 2009);. Other related works include random kernel approximation; and reservoir computing on random recurrent networks;. Specifically on convolutional neural networks (CNNs), there are a few works considering randomization. observe that, on a one-layer convolutional pooling architecture, random weights perform only slightly worse than pre-trained weights. prove that certain convolutional pooling architectures with random weights are inherently frequency selective and translation invariant, and argue that these properties underlie their performance. accomplish three popular visualization tasks, image inversion, texture synthesize and style transfer, using random weight CNNs. extend the scope from fully-connected and convolutional networks and prove that random networks induce representations which approximate the kernel space. combine compressive sensing with random-weight CNNs to investigate the CNN architectures. Dmitry et al. Ulyanov et al. (2017a) utilize randomly-initialized neural nets to finish denoising and inpainting tasks. Motivated by the intuition that "random net is theoretically easier to comprehend than the complicated well-trained net", and that it may reveal the intrinsic property of the network architecture, we use randomization to explore the convolution followed by deconvolution architecture, provide theoretical analysis on empirical observations, and show its application potentials by a style transfer case study. For the network architecture, we focus on VGG16 for the deconvolution. A convolutional layer is usually followed by a pooling layer, except for the last convolutional layer. For consistency, we will explore the "feature representation" after the convolutional layer but before the pooling layer. We build a CNN-DCN architecture on the layer of the feature representation to be studied. 
The convolution operator of a deconvolutional layer in the DCN is the same as the convolution operator in the CNN, and an upsampling operator is applied in the DCN to invert the corresponding pooling operator in the CNN. We will focus on the representations of the convolutional layers. We first explore the reconstruction ability of random CNNs. We assign Gaussian random weights to the CNN part, and train the corresponding DCN to minimize the summation of the pixel-wise loss on the reconstructed images. Training. For each intermediate layer, using the feature vectors of all training images, we train the corresponding DCN such that the summation of the L2-norm loss between the inputs and the outputs is minimized. Let Φ(x_i, w) represent the output image of the DCN, in which x_i is the i-th input image and w the weights of the DCN. We train the DCN to get the desired weights w* that minimize the loss. Then, for a feature vector of a certain layer, the corresponding DCN can predict an estimation of the expected pre-image, the average of all natural images that would have produced the given feature vector. For training, the weight decay is set to 0.0004 to avoid overfitting. The maximum number of iterations is set at 200,000 empirically. We also consider another network architecture, AlexNet. For the random weights, we try several Gaussian distributions with zero mean and various variances. We also try several other types of random distributions (Uniform, Logistic, Laplace) to have a sound exploration. See more details and comparisons in Appendix 2. In the following, we use CDk to represent a Conv[k]-DeConv[k] architecture. Taking the VGG CD2 architecture for elaboration, the loss curves during the training process are shown in Figure 10, which compares VGG and AlexNet with random as well as pre-trained weights. Here Conv2_Pretrained or Conv2_Random indicates whether the CNN is pre-trained or has random weights. We see that the training of the DCN for reconstruction converges much more quickly on the random CNN and yields slightly lower loss. This indicates that by pre-training for classification, the CNN encodes relevant features of the input image in a way favorable for classification but harder for reconstruction. Moreover, VGG has a much lower reconstruction loss than AlexNet. Reconstruction. We take 5,000 samples from the training set and the validation set respectively from ImageNet, and compare their average reconstruction loss. The statistics are shown in Figure 11 ("pre-trained net" represents the pre-trained CNN while "random net" represents the random CNN when we train the corresponding DCN for reconstruction). We see that the pre-trained CNN and the random CNN both have good generalization ability; a random VGG yields much less loss than the pre-trained VGG for the deconvolution reconstruction; and for representations of deeper layers, the inverting loss increases significantly for the pre-trained VGG but grows slowly for the random VGG. The results indicate that the random CNN encodes much richer information about the original images; the pre-trained CNN discards information not crucial for classification, especially in deeper layers, leading to a better classifier but a harder reconstruction task. Figure 4 shows reconstructions of various layers of the random VGG on images outside the training set. The reconstruction quality decays for intermediate representations of deeper layers. The VGG structure with random weights yields accurate reconstruction, even on CD5, which involves 26 convolution layers and 4 pooling layers.
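As a concrete reference point for the setup just described, the following is a minimal PyTorch sketch of a CD1 (Conv1-DeConv1) pair with a frozen random encoder and a trainable decoder. The two 3x3 convolutions of width 64 mirror the VGG16 Conv1 block, and the leaky-ReLU decoder nonlinearity and the summed L2 reconstruction objective follow the text and appendix; the optimizer choice, learning rate, and weight scale are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class CD1(nn.Module):
    """Conv1 block of a VGG-like encoder (frozen, random weights) plus a trainable decoder."""
    def __init__(self, width=64, sigma=0.015):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.dcn = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(width, 3, 3, padding=1),
        )
        for p in self.cnn.parameters():          # random, untrained encoder
            nn.init.normal_(p, mean=0.0, std=sigma)
            p.requires_grad_(False)

    def forward(self, x):
        return self.dcn(self.cnn(x))

model = CD1()
opt = torch.optim.Adam(model.dcn.parameters(), lr=1e-4, weight_decay=4e-4)  # optimizer is an assumption
loss_fn = nn.MSELoss(reduction="sum")            # summed pixel-wise L2 reconstruction loss
# for x in loader: opt.zero_grad(); loss_fn(model(x), x).backward(); opt.step()
```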
In Appendix 2, we also see that the reconstruction quality on VGG based deconvolution is better than that on AlexNet based deconvolution. The above inspire us to further explore what happens if both the CNN and DCN are of random weights. In this section we consider the reconstructions on purely random VGG CNN-DCN architecture (denoted by rrVGG for brevity), and find that the images can still be reconstructed with satisfactory quality! In other words, the CNN randomly extracts the image features and passes them to the DCN, then in an unsupervised manner the DCN reconstructs the input image by random feature extraction! Such intriguing show that the overall random CNN-DCN architecture substantially contributes to the geometric and photometric invariance for the image inversion. In the following, we will systematically explore the reconstruction ability of the rrVGG architecture with ReLU nonlinearity. We found that the network depth has a bigger impact than the network width, and the reconstruction quality decays with deeper layers; with plenty number of channels, an increasing number of random channels promotes the reconstruction quality; and the reconstruction quality decays with a larger kernel size. For evaluation, we use the structural similarity (SSIM) index , which is accurate by considering the correlation and dependency of local spatially close pixels, and consistent to the perceptron of human eyes. To remove the discrepancy on colors, we transform the inputs and outputs in grey-scale, and in case of negative SSIM value, we invert the luminosity of the grayscale image for calculation, so the value is in. A higher value indicates a higher similarity on the images. We first explore the impact of network depth and network width for the random reconstruction, using a cat image outside the training data as an example. The weights are random in N (0, 0.1) 1. We first study the reconstruction quality for different convolutional layers, as in Figure 5. Though there is no training at all, DCN can still perceive geometric positions and contours for CD1 to CD3. The deeper the random representations are, the coarser the reconstructed image is. We can still perceive a very rough contour for the random CD4 architecture, which is already 10 layers deep. Our follow-up theoretical analysis will show that depth does affect the , as it affects the size of receptive fields. In Figure 6, we build a Conv1-DeConv1 (CD1) architecture with different dimensions (width) using the actual width of VGG Conv1 to Conv5 for CD1 respectively. We see that the smaller the dimension (width) is, the coarser the image is. We investigate the reconstruction quality on the number of random channels using the rrVGG CD1 (Conv1-DeConv1) architecture. For simplicity, for each network instance we use the same number of channels in all layers except the output layer. We vary the number of random channels from 4, 8 up to 2048, and for each number of channels, we generate 30 rrVGG Conv1-DeConv1 networks and all random weights are in N (0, 0.1) distribution. For input images we randomly pick 50 samples from the ImageNet validation set. To reduce occasionality on the reconstruction, we transform the inputs and outputs in grey-scale and calculate the average SSIM value on each network, then we do statistics (mean and standard deviation) on the 30 average values. Figure 7 shows the trends on SSIM when the number of channels increases, (a) is for the original rrVGG network and (b) is for a variant of rrVGG network. 
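The evaluation protocol above (grayscale SSIM with luminosity inversion for negative scores, 30 random networks per channel count, per-network averages over 50 images) can be sketched as follows. This is an illustrative reconstruction of the protocol, not the authors' code; make_rrvgg_cd1 is a hypothetical constructor for an untrained Conv1-DeConv1 network of a given width.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity

def gray_ssim(original, reconstructed):
    """Grayscale SSIM; if the score is negative, invert the luminosity of the
    reconstruction so the reported value lies in [0, 1]."""
    a, b = rgb2gray(original), rgb2gray(reconstructed)
    score = structural_similarity(a, b, data_range=1.0)
    return score if score >= 0 else structural_similarity(a, 1.0 - b, data_range=1.0)

def channel_sweep(images, make_rrvgg_cd1,
                  widths=(4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048),
                  n_nets=30, sigma=0.1):
    """For each width, draw n_nets fully random CD1 networks (no training at all),
    reconstruct every image, and report the mean/std of the per-network average SSIM."""
    stats = {}
    for w in widths:
        per_net = []
        for _ in range(n_nets):
            net = make_rrvgg_cd1(width=w, sigma=sigma)    # hypothetical constructor
            per_net.append(np.mean([gray_ssim(x, net(x)) for x in images]))
        stats[w] = (float(np.mean(per_net)), float(np.std(per_net)))
    return stats
```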
The variant of rrVGG is almost the same as the original network, except that the last convolutional layer is replaced by an average layer, which calculates the average over all the channels of the feature maps next to the last layer. We see that an increasing number of random channels promotes the reconstruction quality. Similar in spirit to the random forest method, different channels randomly and independently extract some feature from the previous layer. With a sufficient number of random channels we may encode and transfer all the information to the next layer. In the next section, we will prove that for the variant convolutional network, when the width of the random neural network goes to infinity, the output will converge to a fixed image close to the original image. In Figure 8, we pick some input images and show the corresponding output images closest to the mean SSIM value for various numbers of channels. We transform the randomly-generated colored images to grey-scale images for ease of comparing the structural similarity. The SSIM value is shown on top of each output image. The increasing number of channels promotes the random reconstruction quality. To show how the reconstruction quality decays with deeper convolutional layers, we also run experiments on the rrVGG CD2 architecture, and the quality decays by about a half as evaluated by SSIM. We expect that the reconstruction quality decays with a larger kernel size, as a large kernel cannot capture the local visual features of the input. In the extreme case when the kernel size equals the image dimension, the convolution operator actually combines all pixel values of the input into one output pixel using random weights. We use the rrVGG Conv1_1-DeConv1_1 architecture, which simply contains two convolutional operators. The random weights follow the N(0, 0.1) distribution. For each kernel size, we randomly generate 30 networks for the reconstruction on the 50 sample images selected above. The results verify our assumption, as shown in Figure 7(c). To show how our observation can be used in applications, we provide a potential application of the random CNN-DCN: style transfer with rrVGG. By choosing a suitable number of filters, the rrVGG CD1 architecture can achieve high-quality reconstruction. Besides, these reconstructions also bring slight differences in color and texture, which is well suited to exploring more interesting style transfer results, and multiple rrVGG models can be efficiently acquired without training. (Figure 9: style transfer from several rrVGG models; each model has the same architecture but different random weights.) Our framework builds on the recent optimization-based style transfer work of Gatys et al.; details are given in the appendix. In this section, we provide theoretical analysis to explain the empirical results. We will show that a slight variant of the random CNN architecture has the ability to reconstruct the input image. We also investigate how the depth and width of the network affect the reconstruction ability. Intuitively, as the depth of the network increases, the receptive field of each output image pixel becomes larger, which makes the reconstruction harder, whereas the width of the network, or equivalently, the number of channels, gives more basis or ways to reconstruct the original input. We theoretically show that the reconstruction ability of a random convolutional neural network rises when the number of channels in each layer increases and drops when the depth of the network increases.
Note that DCN is also a kind of CNN with up-sampling layers, so our can be directly applied to the CNN-DCN architecture. For the following part, we will first show the convergence of the output image when the width of the network goes to infinity. Many researchers have worked on the infinite width fully connected networks. For instance,; focus on its relationship with Gaussian Process. They show the exact equivalence between infinitely wide deep networks and Gaussian Processes. Our work focuses on the random convolutional neural network without any training, which is different from the above-mentioned works and is in accordance with our previous empirical analysis. Then, we show the difference between the real output and the convergence value as a function of the width. Finally, we give an upper bound on the angle between the input and the convergence value. Thus, we can bound the reconstruction error. Notations: We use A:,j to denote the j th column vector of matrix A and x to denote the l 2 -norm of vector x. Let L be the number of layers in the neural network and X (i) ∈ R Ni×di be the feature maps in the i th layer, where N i is the number of channels and d i is the dimension of a single channel feature map (i.e. the width of the map times its height). is the output image. w (i,j), a row vector, is the j th convolutional filter of the i th layer if it is a convolutional layer. We use ReLU (x) = max(x, 0) as the activation function in the following analysis. Definition 1. Random CNN architecture To make it possible for property proof, this structure is different from the classic CNN structure in the following three points: 1) Different filters in the same layer are i.i.d. random vectors and filters in different layers are independent. The probability density function of each filter is isotropic. Let k 2) The last layer is the arithmetic mean of the channels of the previous layer, not the weighted combination. 3) Except for X (L−1), each layer of convolutional feature maps are normalized by a factor of, where N i is the number of channels of this layer. From the previous experiments, we see that when the number of channels increases, the quality of the output image improves. Here we prove that when the number of channels goes to infinity, the output will actually converge. Each pixel in the final convergence value is a constant times the weighted norm of its receptive field. Formally, we state our main for the convergence value of the random CNN as follows. Theorem 1. (Convergence Value) Suppose all the pooling layers use l 2 -norm pooling. When the number of filters in each layer of a random CNN goes to infinity, the output f corresponding to a fixed input will converge to a fixed image f * with probability 1, where f * = kz * and k is a constant only related to the CNN architecture and the distribution of random filters and z * i = l∈Ri n (l,i) X:,l 2, where R i is the index set of the receptive field of z * i and n (l,i) is the number of routes from the l th pixel of a single channel of the input image to the i th output pixel. The proof of Theorem 1 is in Appendix 3, Theorem 4. Here for the pooling layer, instead of average pooling, which calculates the arithmetic mean, we use l 2 -norm pooling which calculates the norm of the values in a patch. Intuitively, if most pixels of the input image are similar to their adjacent pixels, the above two pooling methods should have similar outputs. See details for average pooling in Appendix 3. 
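A toy numerical check of the convergence claim can be run in a few lines, under simplifying assumptions: a 1-D input, a single random convolutional layer, and the arithmetic-mean output layer of Definition 1. For i.i.d. Gaussian filter entries with standard deviation sigma, the constant k equals sigma/sqrt(2*pi) (the mean of a half-normal variable), which is used as the reference value below; the relative error should shrink roughly like one over the square root of the number of channels.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(32)                       # a 1-D "image" (toy assumption)
r, sigma = 3, 0.1                        # kernel size and filter std (assumptions)
patches = np.stack([x[i:i + r] for i in range(len(x) - r + 1)])          # (d, r)

for n_channels in (16, 256, 4096, 65536):
    w = rng.normal(0.0, sigma, size=(n_channels, r))                     # random filters
    feats = np.maximum(w @ patches.T, 0.0)                               # ReLU conv features
    out = feats.mean(axis=0)                                             # arithmetic-mean last layer
    target = sigma * np.linalg.norm(patches, axis=1) / np.sqrt(2 * np.pi)  # k * z*
    rel_err = np.linalg.norm(out - target) / np.linalg.norm(target)
    print(n_channels, round(rel_err, 4))
```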
Now we consider the case of a finite number of channels in each layer. We mainly focus on the difference between the real output and the convergence value as a function of the number of channels. We prove that for our random CNN architecture, as the number of channels increases, with high probability, the angle between the real output and the convergence value becomes smaller, which is in accordance with the variant rrVGG experiment shown in previous section. Theorem 2. (Multilayer Variance) Suppose all the pooling layers use l 2 -norm pooling. For a random CNN with L layers and N i filters in the i th layer, let Θ denote the angle between the output f and the convergence value f *, suppose that there is at most one route from an arbitrary input pixel to an arbitrary output pixel for simplicity, then with probability 1 − δ, where Here, (i) (x) actually measures the local similarity of x. The full definition of (i) (x) and the proof of this theorem is in Appendix 3. Finally, we focus on how well our random CNN architecture can reconstruct the original input image. From Theorem 2, we know that with high probability, the angle between the output of a random CNN with finite channels and the convergence value will be upper-bounded. Therefore, to evaluate the performance of reconstruction, we focus on the difference between the convergence value and the input image. We will show that if the input is an image whose pixels are similar to their adjacent pixels, then the angle between the input image X and the convergence value of the output image will be small. To show the essence more clearly, we state our for a two-layer random CNN and provide the multi-layer one in Appendix 3, which needs more complicated techniques but has the same insight as the two-layer one. Theorem 3. For a two-layer random CNN, suppose each layer has a zero-padding scheme to keep the output dimension equal to the dimension of the original input. The kernel size is r and stride is 1. The input image is X ∈ R d0, which has only one channel, whose entries are all positive. t = X t − X t means the difference between one pixel X t and the mean of the r-sized image patch whose center is X t. Let Φ be the angle between the input image X and the convergence value of the output image, we have cos The full proof of Theorem 3 is in Appendix 3. Note that when the kernel size r increases, t will become larger as an image only has local similarity, so that the lower bound of the cosine value becomes worse, which explains the empirical in previous section. In this work, we introduce a novel investigation on deep random representations through the convolution-deconvolution architecture, which to our knowledge is the first study on the randomness of deconvolutional networks in the literature. We extensively explore the potential of randomness for image reconstruction on deep neural networks, and found that images can be reconstructed with satisfactory quality when there are a sufficient number of channels. Extensive investigations have been performed to show the effectiveness of the reconstruction. We also provide theoretical analysis that a slight variant of the random CNN architecture has the ability to reconstruct the input image, and the output converges to the input image when the width of the network, i.e. number of channels, goes to infinity. We also bound the reconstruction error between the input and the convergence value as a function of the network width and depth. and. 
A convolutional layer is usually followed by a pooling layer, except for the last convolutional layer, Conv5. For consistency, we will explore the output after the convolutional layer but before the pooling layer. In what follows, "feature representation" or "image representation" denotes the feature vectors after the linear convolutional operator and the nonlinear activation operator but before the pooling operator for dimension reduction. We build a CNN-DCN architecture on the layer of feature representation to be studied. The convolution operator of a deconvolutional layer in DCN is the same as the convolution operator in CNN, and an upsampling operator is applied in DCN to invert the corresponding pooling operator in CNN, as designed in. We will focus on the representations of the convolutional layers, since build DCNs for each layer of the pre-trained AlexNet and find that the predicted image from the fully connected layers becomes very vague. For the activation operator, we apply the leaky ReLU nonlinearity with slope 0.2, that is, r(x) = x if x ≥ 0 and otherwise r(x) = 0.2x. At the end of the DCN, a final Crop layer is added to cut the output of DeConv1 to the same shape as the original images. We build deconvolutional networks on both VGG16 and AlexNet, and most importantly, we focus on the random features of the CNN structure when training the corresponding DCN. Then we do no training for deconvolution and explore the properties of the purely random CNN-DCN architecture on VGG16. For the random weights assigned to CNN or DCN, we try several Gaussian distributions with zero mean and various variance to see if they have different impact on the DCN reconstruction. Subsequent comparison shows that a small variance around 0.015 yields minimal inverting loss. We also try several other types of random distributions, Uniform, Logistic, Laplace, to study their impact. • The Uniform distribution is in [-0.04, 0.04), such that the interval equals [µ − 3δ, µ + 3δ] where µ = 0 and δ = 0.015 are parameters for Gaussian distribution. • The Logistic distribution is 0-mean and 0.015-scale of decay. It resembles the normal distribution in shape but has heavier tails. • The Laplace distribution is with 0 mean and 2 * λ 2 variance (λ = 0.015), which puts more probability density at 0 to encourage sparsity. For each intermediate layer, using the feature vectors of all training images, we train the corresponding DCN such that the summation of L 2 -norm loss between the inputs and the outputs is minimized. Let Φ(x i, w) represent the output image of the DCN, in which x i is the input of the i th image and w is the weights of the DCN. We train the DCN to get the desired weights w * that minimize the loss. Then for a feature vector of a certain layer, the corresponding DCN can predict an estimation of the expected pre-image, the average of all natural images which would have produced the given feature vector. training. The weight decay is set to 0.0004 to avoid overfitting. The maximum number of iterations is set at 200,000 empirically. Training. We observe similar for the training loss in different layers. Take the Conv2-DeConv2 architecture for elaboration, the loss curves during the training process are shown in Figure 10. Figure 10 (a) compares VGG and AlexNet on random as well as pre-trained weights. The training for reconstruction converges much quicker on random CNN and yields slightly lower loss, and this trend is more apparent on VGG. 
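The random filter distributions listed above can be drawn as follows; this is only a small helper mirroring the stated parameters, with the shape of the filter tensor left to the caller.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_filters(shape, kind="gaussian"):
    """Filter initializers matching the distributions compared above."""
    if kind == "gaussian":
        return rng.normal(0.0, 0.015, size=shape)          # zero mean, delta = 0.015
    if kind == "uniform":
        return rng.uniform(-0.04, 0.04, size=shape)        # interval [mu - 3*delta, mu + 3*delta)
    if kind == "logistic":
        return rng.logistic(0.0, 0.015, size=shape)        # zero mean, 0.015 scale of decay
    if kind == "laplace":
        return rng.laplace(0.0, 0.015, size=shape)          # lambda = 0.015, variance 2 * lambda^2
    raise ValueError(kind)
```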
It indicates that by pre-training for classification, CNN encodes relevant features of the input image in a way favorable for classification but harder for reconstruction. Also, VGG yields much lower inverting loss as compared with AlexNet. Figure 10 (b) shows that random filters of different small-variance Gaussian distributions on CNN affect the initial training loss, but the loss eventually converges to the same magnitude. (The loss curve of N is not included as the loss is much larger even after the converge.) Figure 10 (c) shows that the four different random distributions with appropriate parameters acquire similar reconstruction loss. Generalization. We take 5000 samples from the training set and validation set respectively from ImageNet, and compare their average reconstruction loss. The statistics is as shown in Figure 11, where CDk represents a Conv[k]-DeConv[k] architecture. Figure 11 (a) shows that the VGG architecture is good in generalization for the reconstruction, and random VGG yields much less loss than pre-trained VGG. For representations of deeper layers, the inverting loss increases significantly for pre-trained VGG but grows slowly for random VGG. This means that in deeper layers, the pre-trained VGG discards much more information that is not crucial for classification, leading to a better classifier but a harder reconstruction task. Figure 11 (b) compares VGG and AlexNet on the CD3 architecture. It shows that the reconstruction quality on random compares favourably against that on pre-trained in VGG. Reconstruction. Figure 12 shows reconstructions from various layers of random VGG and random AlexNet, denoted by rwVGG and rwAlexNet respectively. 2 On both rwVGG and rwAlexNet, the reconstruction quality decays for representations of deeper layers. The rwVGG structure yields more accurate reconstruction, even on Conv5, which involves 26 convolution operations and 4 max pooling operations. Figure 13 shows reconstructions from a cat example image for various distributions of rwVGG CD2. Except for N, the reconstruction quality is indistinguishable by naked eyes. It shows that different random distributions work well when we set the random weights relatively sparse. In a nutshell, it is interesting that random CNN can speed up the training process of the DCN on both VGG and AlexNet, obtain higher reconstruction quality and generalize well for other inputs. Regarding weights in the convolutional part as a feature encoding of the original image, then the deconvolutional part can decode from the feature representations encoded by various methods. The fact that the random encoding of CNN is easier to be decoded indicates that the training for classification moves the image features of different categories into different manifolds that are moving further apart. Also, it may discard information irrelevant for the classification. The pre-trained CNN benefits the classification but is adverse to the reconstruction. For completeness, we repeat the notations and the definition of random CNN architecture. Notations: We use A:,j to denote the j th column vector of matrix A and use A ij to denote its entry. Let x i be the i th entry of vector x. Let L be the number of layers in the neural network and X (i) ∈ R Ni×di be the feature maps in the i th layer, where N i is the number of channels and d i is the dimension of a single channel feature map (i.e. the width of the map times its height). X = X is the input image and f = X (L) is the output image. 
For convenience, we also define convolutional feature maps to be the feature maps after convolutional operation and define pooled feature maps and up-sampled feature maps in the same way. In the i th layer, let r i be the fixed kernel size or the pool size (e.g. 3 × 3). If X (i+1) is convolutional feature maps, let Y (i) ∈ R Niri×di be the patched feature for pooling and up-sampling layers. For the j th pixel of the output image in the last layer, define its receptive filed on the input image in the first layer as X:,Rj = {X :,m | m ∈ R j}, where R j is a set of indexes. The activation function ReLU (x) = max(x, 0) is the element-wise maximum operation between x and 0 and (·) m is the element-wise power operation. Definition 2. Random CNN architecture This structure is different from the classic CNN in the following three points: • Different filters in the same layer are i.i.d. random vectors and filters in different layers are independent. The probability density function of each filter is isotropic. Let k 4 all exist. • The last layer is the arithmetic mean of the channels of the previous layer, not the weighted combination. • Except for X (L−1), each layer of convolutional feature maps are normalized by a factor of, where N i is the number of channels of this layer. A.3.1 CONVERGENCE Theorem 4. (Convergence Value) Suppose all the pooling layers use l 2 -norm pooling. When the number of filters in each layer of a random CNN goes to infinity, the output f corresponding to a fixed input will converge to a fixed image f * with probability 1, where f * = kz * and k is a constant only related to the CNN architecture and the distribution of random filters and z * i = l∈Ri n (l,i) X:,l 2, where n (l,i) is the number of routes from the l th input pixel to the i th output pixel. Here for the pooling layer, instead of average pooling, which calculates the arithmatic mean, we use l 2 -norm pooling which calculates the norm of the values in a patch. We also show the for average pooling in Theorem A.7. To prove the theorem, we first prove the following lemma. Lemma 5. Suppose w ∈ R n, n ≥ 2 is a random row vector and its probability density function is isotropic. Y ∈ R n×d is a constant matrix whose i th column vector is denoted by y i. z ∈ R d is a row vector and where θ ij is the angle between y i and y j. Proof. Note that max{·, ·} and (·) m are both element wise operations. The i th element of Eg m is (Eg m) i = Emax{wy i, 0} m. Since the probability density function of w is isotropic, we can rotate y i to y i without affecting the value of Emax{wy i, 0} m. Let Where the third equality uses the fact that the marginal distribution of w 1 is also isotropic. Similarly, we have: Eg i g j = Emax{wy i, 0}max{wy j, 0}. We can also rotate y i and y j to y i and y j. Let y i = (y i, 0, 0, ..., 0) T and y j = (y j cos θ ij, y j sin θ ij, 0, ..., 0) T and suppose the marginal probability density function of (w 1, w 2) is p(ρ) which does not depend on φ since it is isotropic, where ρ = w 2 1 + w 2 2 is the radial coordinate and φ is the angular coordinate. We have: Note that: We obtain the second part of this lemma. Now, we come to proof of Theorem A.1. Proof. According to Lemma A.2, if X (i+1) is convolutional feature maps, we can directly obtain: where we have fixed Y (i) and the expectation is taken over random filters in the i th layer only. Since different channels in X (i+1) are i.i.d. 
random variables, according to the strong law of large numbers, we have: which implies that with probability 1, Suppose that all N j for 1 ≤ j ≤ i have gone to infinity and z (i) has converged to z * (i), the above expression is the recurrence relation between z * (i+1) and z * (i) in a convolutional layer: If X (i+1) is up-sampled feature maps, a pixel X (i) jp will be up-sampled to a r-sized block {X jp and all the other elements are zeros. Definẽ D So far, we have obtained the recurrence relation in each layer. In order to get z * (i+1) given z * (i), we use the same sliding window scheme on z * (i) as that of the convolutional, pooling or upsampling operation on the feature maps. The only difference is that in a convolutional layer, instead of calculating the inner product of a filter and the vector in a sliding window, we simply calculate the l 2 -norm of the vector in the sliding window and then multiply it by k (i) 2. Note that z * can be directly obtained from the input image. Repeat this process layer by layer and we can obtain z * (L−2). According to Lemma A.2, we have: Suppose that z (L−2) has converged to z * (L−2), and by Definition A., we have f * = kz *. Note that z * is obtained through a multi-layer sliding window scheme similar to the CNN structure. It only depends on the input image and the scheme. It is easy to verify that z * i is the square root of the weighted sum of the square of input pixel values within the receptive field of the i th output pixel, where the weight of an input image pixel is the number of routes from it to the output pixel. Theorem 6. (Variance) For a two-layer random CNN with N filters in the first convolutional layer, let Θ denote the angle between the output f and the convergence value f *, then with probability 1 − δ, Proof. According to Theorem A.1, we have. For a two-layer CNN, we can directly obtain: Since different channels are i.i.d. random variables, we have EX According to Markov inequality, we have: 1 ) 2 ) z * 2, then with probability 1 − δ: To extend the above two-layer to a multi-layer one, we first prove the following lemma. Note that in this lemma, D (i) should be replaced byD (i) defined in the proof of Theorem A.1 if X (i) is up-sampled feature maps. is a linear mapping. For simplicity, suppose that for 2 for convolutional layers and k (i) = 1 for pooling and up-sampling layers and Proof. According to the definition of φ (i) (·) and Theorem A.1, we have: It is easy to verify that for any c ∈ R and x, y ∈ R di+1 we have x j, which is the average value of the m th patch.. We have: m, which implies that Theorem 8. (Multilayer Variance) Suppose all the pooling layers use l 2 -norm pooling. For a random CNN with L layers and N i filters in the i th layer, let Θ denote the angle between the output f and the convergence value f *, suppose that there is at most one route from an arbitrary input pixel to an arbitrary output pixel for simplicity, then with probability 1 − δ, where Proof. We will bound Θ recursively. Suppose that the angle between (z (i) ) 2 and (z * (i) ) 2 is θ i. We tional feature maps, we have obtained in the proof of Theorem A.1 that Using similar method to the proof of Theorem A.3 and let α i+1 denote the angle between g (i+1) and Eg (i+1), we can derive that with probability 1 − δ i+1, For a l 2 -norm pooling layer or an up-sampling layer, we have:, we have Let v denote the angle between z (L−2) and f, we have obtained its bound in Theorem A.3. With. 
With all the bounds above, define N by for simplicity, we can obtain the bound of Θ: with probability 1 − δ, Here, R t is the index set of the receptive field of f t and n (α,t) is the number of routes from X α to f t. Suppose that the receptive field of each f t has the same size and shape, X t is at a fixed relative position of the receptive field of f t and n (α,t) only depends on the relative position between X α and X t. Let X t = α∈R t n (α,t) Xα α∈R t n (α,t) be the weighted average and t = X t − X t. By using the same technique above, we can obtain that Note that although the bound is the same as the two-layer convolutional neural network, as the receptive field is enlarged, t can be much larger, so that the above bound will be worse. We also give the convergence value for average pooling in the next theorem. Theorem 10. (Convergence Value, average pooling) Suppose all the pooling layers use average pooling. When the number of filters in each layer of a random CNN goes to infinity, the output f corresponding to a fixed input will converge to a fixed image f * with probability 1. is convolutional feature maps, according to Lemma A.2, we have: where ϕ Note that: Suppose that all N j for 1 ≤ j ≤ i have gone to infinity and C (i) has converged to C * (i), the above expressions are the recurrence relation between C * (i+1) and C is average-pooled feature maps, we have: We have: which is the recurrence relation for an average pooling layer. For an up-sampling layer, a pixel X (i) jk will be up-sampled to a block {X jk and all the other elements are zeros. We have: otherwise. Note that we can directly calculate C * according to the input image. So we can recursively obtain C * (L−2) and thus z * (L−2). According to Lemma A.2, we have: Suppose that z (L−2) has converged to z * (L−2), and by Definition A., we have: f a.s. We can obtain the convergence value f * through the above process. We observe that by choosing a suitable number of random filters, the rrVGG Conv1-DeConv1 architecture can achieve high-quality reconstruction. The reconstructions also bring slight differences in the color and texture, which is suited for exploring more interesting style transfer . Hence we utilize the framework, as shown in , we also adopted the linear combination of both content loss and style loss,. L c indicates the content loss which is the euclidean distance between the content vector V c and stylized feature vector V new, where A i is the activation vector in i-th layer of the FVT. The style loss L s is obtained from the Gram matrix of the feature vectors. In Eq. equation 4, G represents the gram matrix. Inspired by Li et al.Huang & Belongie (2017, we also utilize the the mean value and standard deviation of feature vectors to calculate the style loss, the of which is similar to the gram loss L s . In Fig. 14, FVT (iterative optimization) only contains the convolutional layer from Conv2 to Conv5 and rrVGG contains Conv1 and DeConv1. In addition, we can also utilize more layers on rrVGG and FVT will contain less layers on the optimization network correspondingly, which will further speed up the style transfer process. In experiments, our framework is faster than the original optimization based approach and can transfer the arbitrary styles. As for the stylization effectiveness, we compared our with and Ulyanov et al. Ulyanov et al. (2017b). In Fig. 15, rrV GG 1 and rrV GG 2 columns denote the stylization acquired from our framework, applying two different rrVGG models. As shown in Fig. 
15, our stylization are competitive to other well-trained approaches. Focused on rrV GG 1 column, our stylized is inclined to extract more features from the style image and slightly weaken the representation of content image. Since we utilize rrVGG CNN and DCN to complete the transformation between feature space and image space, some content information is possible to be lost during the reconstruction process. Despite that, our approach is still capable of generating high quality stylized images. In addition, we also investigate the stylized effectiveness when modifying the balance between style and content in FVT. As shown in Figure 16, the number below each column indicates the relative weightings between style and content reconstruction. In our framework, the transition from content to style is smooth with increasing ratio. As shown in Figure 17, our stylized is inclined to extract more features from the style image and slightly weaken the representation of the content image. content style rrVGG 1 rrVGG 2 rrVGG 3 Figure 17: Style transfer from several rrVGG models. Each model has the same architecture but different random weights. As proposed in our paper, in terms of different distributions and number of filters, rrVGG can reconstruct images with diverse textures and colors. In Fig. 14, replacing CNN and DCN parts with different rrVGG models, our framework can generate abundant stylized images depending on a single style image. Since rrVGG models are generated without training, it won't incur additional computational cost. As shown in Fig. 17, the rightmost three columns comprise the stylized images with different rrVGG model weights while the leftmost two columns represent input content and style images respectively. For each row, given content and style images, we choose three stylized images generated by our framework using different rrVGG models. For instance, in the 3-rd row of Fig. 17, the parameters of chosen rrVGG models are as following: rrVGG 1:(N (0, 0.01), filter size:3, filter num:128), rrVGG 2:(N (0, 0.01), filter size:5, filter num:256) and rrVGG 3:(N (0, 0.1), filter size:3, filter num:32). As shown in Fig. 17, those stylized images not only well preserve the style structure such as the shape of the curved lines, waves and abstract objects, but also exhibit novel combinations of the structure, shade and hue. Coupled with various rrVGG models, the proposed style transfer framework is able to unleash the diversity and variation inside a single style image, which works well in practice. Meanwhile, it's flexible as well as fast, since the FVT part can be implemented either by an optimization process or some feed-forward convolutional layers. | We investigate the deep representation of untrained, random weight CNN-DCN architectures, and show their image reconstruction quality and possible applications. | 1,260 | scitldr |
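The passage above generates stylized images simply by re-drawing the random weights of the rrVGG encoder/decoder (e.g. N(0, 0.01), 3x3 filters, 128 channels for "rrVGG 1"). A minimal sketch of building such an untrained, frozen random encoder in PyTorch follows; the builder function, layer layout, and defaults are illustrative assumptions, not the authors' code.

```python
import torch.nn as nn

def random_vgg_block(in_ch, out_ch, kernel_size=3, std=0.01, n_convs=2):
    """Hypothetical rrVGG-style block: conv layers with i.i.d. N(0, std^2) weights,
    never trained (parameters are frozen after initialization)."""
    layers = []
    for i in range(n_convs):
        conv = nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                         kernel_size, padding=kernel_size // 2)
        nn.init.normal_(conv.weight, mean=0.0, std=std)
        nn.init.zeros_(conv.bias)
        layers += [conv, nn.ReLU(inplace=True)]
    block = nn.Sequential(*layers)
    for p in block.parameters():
        p.requires_grad_(False)          # random weights stay fixed
    return block

# e.g. the "rrVGG 1" setting quoted above: N(0, 0.01), filter size 3, 128 filters
encoder = random_vgg_block(in_ch=3, out_ch=128, kernel_size=3, std=0.01)
```

Re-running the builder with a different seed, standard deviation, or filter count yields another rrVGG variant at no training cost, which is what produces the diverse stylizations described above.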
The current trade-off between depth and computational cost makes it difficult to adopt deep neural networks for many industrial applications, especially when computing power is limited. Here, we are inspired by the idea that, while deeper embeddings are needed to discriminate difficult samples, a large number of samples can be well discriminated via much shallower embeddings. In this study, we introduce the concept of decision gates (d-gate), modules trained to decide whether a sample needs to be projected into a deeper embedding or if an early prediction can be made at the d-gate, thus enabling the computation of dynamic representations at different depths. The proposed d-gate modules can be integrated with any deep neural network and reduces the average computational cost of the deep neural networks while maintaining modeling accuracy. Experimental show that leveraging the proposed d-gate modules led to a ~38% speed-up and ~39% FLOPS reduction on ResNet-101 and ~46% speed-up and $\sim$36\% FLOPS reduction on DenseNet-201 trained on the CIFAR10 dataset with only ~2% drop in accuracy. Past studies such as BID15 have shown that deeper architectures often lead to better modeling performance; however, deeper architectures also pose a number of issues. Besides becoming more prone to overfitting and becoming more difficult to train, the trade-off between depth and computational cost makes it difficult to adopt deeper architectures for many industrial applications. He et al. BID6 tackled the former issue of degradation in learning deeper neural networks (e.g., vanishing gradient) by introducing the concept of residual learning, where learning is based on the residual mapping rather than directly on the unreferenced mapping. Following that, Xie et al. BID18 took advantage of the inception idea (i.e, split-transform-merge strategy) within a residual block structure to provide better subspace modeling while resolving the degradation problem at the same time, ing in a ResNext architecture with improved modeling accuracy. To tackle the issue of computational cost, a wide variety of methods have been proposed, including: precision reduction BID9, model compression BID5, teacher-student strategies BID7, and evolutionary algorithms BID12 BID13.More recently, conditional computation BID0 BID3 BID11 BID17 BID1 and early prediction BID16 methods have been proposed to tackle this issue, which involve the dynamic execution of different modules within a network. Conditional computation methods have largely been motivated by the idea that residual networks can be considered as an ensemble of shallower networks. As such, these methods take advantage of skip connections to determine which residual modules are necessary to be executed, with most leveraging reinforcement learning. In this study, we explore the idea of early prediction but instead draw inspiration from the soft-margin support vector BID2 theory for decision-making. Specifically, we introduce the concept of decision gates (d-gate), modules trained to decide whether a sample needs to be projected into a deeper embedding or if an early prediction can be made at the d-gate, thus enabling the conditional computation of dynamic representations at different depths. The proposed d-gate modules can be integrated with any deep neural network without the need to train networks from scratch, and thus reduces the average computational complexity of the deep neural networks while maintaining modeling accuracy. 
Deeper neural network architectures have been demonstrated to provide a better subspace embedding of data when compared to shallower architectures, ing in better discrimination of data space and better modeling accuracy. Inspired by soft-margin support vector BID2 theory, we make a hypothesis that while deeper subspace embeddings are necessary to discriminate samples that lie close to the decision boundaries in the lower embedding space, their effect on samples that already lie far from decision boundaries in the shallower embedding space may be insignificant and unnecessary. Therefore, an effective yet efficient mechanism for determining the distance between samples and the decision boundaries in the lower layers of the network would make it possible to perform early predictions on these samples without projecting them into a deeper embedding space. This approach would reduce the average computational cost of prediction significantly. However, devising an efficient way to determine whether a sample is a boundary sample is a challenging problem. Here, we formulate the early prediction problem as a risk minimization problem, and introduce a set of single-layer feed-forward networks (which we will refer to as decision gates (d-gate)) that are integrated directly into a deep neural network (see FIG0). The goal of d-gate modules is to not only decide whether a sample requires projection into a deep embedding space, but also minimize the risk of early wrong classifications as well. Specifically, we train d-gate modules that are integrated into a deep neural network via a hinge loss BID4 that minimizes the risk of early misclassification in lower embeddings while deciding whether the sample is a boundary sample: DISPLAYFORM0 where y is the ground truth label for the input data x andŷ is the predicted class label via the d-gate module with the set of weights w and biases b. The set of weights w has a dimensionality of f × c, where f denotes the number of input features to the d-gate module and c denotes the number of class labels in the classification task. This d-gate module provides an important benefit where the of w T x − b provides the distances of the sample to the corresponding decision boundary of each class label in the embedding space. Training the d-gate module in this way provides a linear classifier where samples that do not require deeper embeddings for discrimination are those with larger distances (i.e., with the positive sign) from the decision boundary. It is important to note that single layer nature of d-gate modules is designed to account for efficiency. The d-gate module is trained via the training data utilized to train the deep neural network and the objective for each d-gate module is to minimize the classification error on the training data. Therefore, the loss function on the training data can be formulated as: DISPLAYFORM1 where Y denotes the set of ground truth labels for all training data. What is most interesting about L(Y,Ŷ ; w, b) is the fact that L(·) is a convex function of w and b, and as such can be optimized via gradient descent. Therefore, traditional gradient descent can be adapted here, where a step is taken in the direction of a vector selected from the function's sub-gradient BID14 to find the optimized values. As a , the d-gate can be trained within a mini-batch training framework, which makes it is very convenient for utilizing in the training of deep neural networks with large datasets. 
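Since a d-gate is just a single linear map whose outputs w^T x - b act as signed distances to the per-class decision boundaries, it can be written in a few lines. The sketch below uses PyTorch's built-in multi-class hinge loss as one plausible stand-in for the hinge formulation above (the exact loss expression is garbled in this copy), and all feature and class sizes are made-up placeholders.

```python
import torch
import torch.nn as nn

class DGate(nn.Module):
    """Single-layer decision gate: scores w^T x - b, one signed distance per class."""
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.linear = nn.Linear(num_features, num_classes)

    def forward(self, features):
        return self.linear(features.flatten(start_dim=1))

# Training sketch: a multi-class hinge loss (convex in w and b), optimized with
# plain mini-batch SGD on the same training data as the backbone network.
d_gate = DGate(num_features=256, num_classes=10)
hinge = nn.MultiMarginLoss(margin=1.0)
opt = torch.optim.SGD(d_gate.parameters(), lr=0.01)

hidden_feats = torch.randn(32, 256)          # stand-in for intermediate activations
labels = torch.randint(0, 10, (32,))
loss = hinge(d_gate(hidden_feats), labels)
opt.zero_grad(); loss.backward(); opt.step()
```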
In essence, the proposed d-gate module can compute the distance of each sample to the decision boundaries based on w T x − b; the calculated distances are compared with the decision threshold t of each d-gate to determine whether early prediction on the sample can be performed at the d-gate or have the sample moved to a deeper stage of the deep neural network to project into a better embedding space for improved prediction. The samples that are far from the decision boundaries to output larger values in w T x − b; therefore, if the d-gate distance for a sample satisfies the d-gate decision threshold, the class corresponding to the largest distance is assigned as the predicted class label for the sample in this early prediction step. formance of the network with decision gates trained via the proposed hinge loss is compared with decision gates trained via a conventional cross-entropy approach. It can be observed that the decision gates trained via the hinge loss provide greater computational efficiency with higher accuracy than if cross-entropy loss is leveraged. The efficacy of the proposed d-gate modules is examined with two different network architectures (ResNet101 BID6 and DenseNet201 BID8) on the CIFAR10 dataset. A key benefit of the proposed d-gate modules is that it enables fine control over the trade-off between modeling accuracy and computational cost by adjusting the d-gate decision thresholds. By decreasing the d-gate decision thresholds, the number of samples undergoing early prediction increases, thus reducing the average computational cost of network predictions greatly. For this study, we integrated two d-gate modules in ResNet-101 (after the first and second main blocks) and DenseNet-201 (after the first and second dense blocks), and explore different d-gate configurations. The networks are implemented in the Pytorch framework and the prediction speeds are reported based on single Nvidia Titan Xp GPU.It can be observed from TAB0 that the computational cost of ResNet network can be reduced by 67 MFLOPS while maintaining the same level of accuracy as to the original ResNet-101 by integrating two d-gate modules with decision thresholds of (t1, t2) = (2.5, 2.5). The integration of d-gate modules can reduce the computational cost of ResNet-101 network by ∼39% (i.e., lower by 1.95 GFLOPS) with 1.7% drop in accuracy compared to the original ResNet-101 (with distance thresholds (t1, t2) = (1.0, 2.0) in d-gate1 and d-gate2), ing in a ∼38% speed-up. The experiments for DenseNet-201 show that it is possible to reduce the number of FLOPs by 970 MFLOPs (∼36% reduction) with only a ∼2% drop in accuracy, leading to a ∼46% speed-up. Furthermore, a 2.3× speed-up can be achieved with d-gate modules compared to the original DenseNet-201 within a 3% accuracy margin. Based on the experimental , the proposed d-gate modules lead to a significant increase in prediction speed, making it well-suited for industrial applications. In addition to the d-gate modules being proposed, one of the key contributions of this paper is the introduction of a hinge loss for training the d-gate modules. Past studies BID10 have argued that crossentropy in a small margin between the decision boundaries and the training data. As such, it is very difficult to trust the confidence values of the Softmax layer to decide about the sample since there is no valuable information in the Softmax output. 
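The early-prediction rule described at the start of this passage reduces to: compute the gate's per-class distances, exit if the largest distance clears the gate's threshold, otherwise push the sample into the deeper stages. A hedged sketch is below; the function and argument names are invented for illustration, it assumes one gate after each listed backbone stage, and the all-samples check reflects the batch-size-one setting used for the speed measurements.

```python
import torch

@torch.no_grad()
def predict_with_dgates(stages, d_gates, thresholds, final_head, x):
    """stages: backbone blocks in order; d_gates[i] reads the output of stages[i]."""
    h = x
    for stage, gate, t in zip(stages, d_gates, thresholds):
        h = stage(h)
        distances = gate(h)                  # w^T h - b, one signed distance per class
        best, cls = distances.max(dim=1)
        if bool((best >= t).all()):          # far enough from every decision boundary
            return cls                       # early prediction at this d-gate
    return final_head(h).argmax(dim=1)       # hard samples use the full network
```

Lowering the thresholds t makes more samples exit early, which is exactly the accuracy/computation trade-off reported in the experiments above.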
To demonstrate the effectiveness of the hinge loss leveraged in the proposed d-gates compared to the cross-entropy loss, an additional comparative experiment was conducted. More specifically, two decision gates were added to ResNet101 in the same way as reported. However, rather than train using the proposed hinge loss, the decision gates were instead trained via a cross-entropy loss. This enables us to compare the effect of hinge loss vs. cross-entropy loss on decision gate functionality. FIG1 demonstrates the accuracy vs. number of FLOPs for the network where the decision gates were trained based on the proposed hinge loss approach compared to trained using a regular cross-entropy training procedure. It can be observed that, with the same number of FLOPs in the network, the network where the decision gates were trained based on the proposed hinge loss provides much higher modeling accuracy compared to that trained via cross-entropy loss. The accuracy gap increases exponentially when the decision gates are configured such that the network uses fewer number of FLOPs. What this illustrates is the aforementioned issue with the use of cross-entropy loss and decision boundaries. | This paper introduces a new dynamic feature representation approach to provide a more efficient way to do inference on deep neural networks. | 1,261 | scitldr |
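The accuracy-versus-FLOPs curves discussed above depend on how many samples exit at each gate. The average cost per prediction can be estimated from per-stage FLOP counts and empirical exit rates; the helper below is a hypothetical illustration of that bookkeeping, with toy numbers, and is not code or data from the paper.

```python
def expected_flops(stage_flops, exit_fractions):
    """stage_flops[i]: cost (e.g. GFLOPs) of running everything up to and including
    gate i; the last entry is the cost of the full network. exit_fractions[i]:
    fraction of samples that stop at gate i; the rest run the full network."""
    remaining = 1.0 - sum(exit_fractions)
    avg = sum(frac * cost for frac, cost in zip(exit_fractions, stage_flops[:-1]))
    return avg + remaining * stage_flops[-1]

# toy numbers only, loosely shaped like a two-gate ResNet-101 configuration
print(expected_flops(stage_flops=[1.2, 2.4, 5.0], exit_fractions=[0.35, 0.30]))
```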
This work provides an additional step in the theoretical understanding of neural networks. We consider neural networks with one hidden layer and show that when learning symmetric functions, one can choose initial conditions so that standard SGD training efficiently produces generalization guarantees. We empirically verify this and show that this does not hold when the initial conditions are chosen at random. The proof of convergence investigates the interaction between the two layers of the network. Our highlight the importance of using symmetry in the design of neural networks. Building a theory that can help to understand neural networks and guide their construction is one of the current challenges of machine learning. Here we wish to shed some light on the role symmetry plays in the construction of neural networks. It is well-known that symmetry can be used to enhance the performance of neural networks. For example, convolutional neural networks (CNNs) (see) use the translational symmetry of images to classify images better than fully connected neural networks. Our focus is on the role of symmetry in the initialization stage. We show that symmetry-based initialization can be the difference between failure and success. On a high-level, the study of neural networks can be partitioned to three different aspects. Expressiveness Given an architecture, what are the functions it can approximate well? Training Given a network with a "proper" architecture, can the network fit the training data and in a reasonable time? Generalization Given that the training seemed successful, will the true error be small as well? We study these aspects for the first "non trivial" case of neural networks, networks with one hidden layer. We are mostly interested in the initialization phase. If we take a network with the appropriate architecture, we can always initialize it to the desired function. A standard method (that induces a non trivial learning problem) is using random weights to initialize the network. A different reasonable choice is to require the initialization to be useful for an entire class of functions. We follow the latter option. Our focus is on the role of symmetry. We consider the following class of symmetric functions S = S n = n ∑ i=0 a i · 1 |x|=i: a 1,..., a n ∈ {±1}, where x ∈ {0, 1} n and |x| = ∑ i x i. The functions in this class are invariant under arbitrary permutations of the input's coordinates. The parity function π(x) = (−1) |x| and the majority function are well-known examples of symmetric functions. Expressiveness for this class was explored by. They showed that the parity function cannot be represented using a network with limited "connectivity". Contrastingly, if we use a fully connected network with one hidden layer and a common activation function (like sign, sigmoid, or ReLU) only O(n) neurons are needed. We provide such explicit representations for all functions in S; see Lemmas 1 and 2. We also provide useful information on both the training phase and generalization capabilities of the neural network. We show that, with proper initialization, the training process (using standard SGD) efficiently converges to zero empirical error, and that consequently the network has small true error as well. Theorem 1. There exists a constant c > 1 so that the following holds. 
There exists a network with one hidden layer, cn neurons with sigmoid or ReLU activations, and an initialization such that for all distributions D over X = {0, 1} n and all functions f ∈ S with sample size m ≥ c(n+log(1/δ))/ε, after performing poly(n) SGD updates with a fixed step size h = 1/poly(n) it holds that is the network after training over S. The number of parameters in the network described in Theorem 1 is Ω(n 2). So in general one could expect overfitting when the sample size is as small as O(n). Nevertheless, the theorem provides generalization guarantees, even for such a small sample size. The initialization phase plays an important role in proving Theorem 1. To emphasize this, we report an empirical phenomenon (this is "folklore"). We show that a network cannot learn parity from a random initialization (see Section 5.3). On one hand, if the network size is big, we can bring the empirical error to zero (as suggested in), but the true error is close to 1/2. On the other hand, if its size is too small, the network is not even able to achieve small empirical error (see Figure 5). We observe a similar phenomenon also for a random symmetric function. An open question remains: why is it true that a sample of size polynomial in n does not suffice to learn parity (with random initialization)? A similar phenomenon was theoretically explained by and. The parity function belongs to the class of all parities where · is the standard inner product. This class is efficiently PAC-learnable with O(n) samples using Gaussian elimination. A continuous version of P was studied by and. To study the training phase, they used a generalized notion of statistical queries (SQ); see. In this framework, they show that most functions in the class P cannot be efficiently learned (roughly stated, learning the class requires an exponential amount of resources). This framework, however, does not seem to capture actual training of neural networks using SGD. For example, it is not clear if one SGD update corresponds to a single query in this model. In addition, typically one receives a dataset and performs the training by going over it many times, whereas the query model estimates the gradient using a fresh batch of samples in each iteration. The query model also assumes the noise to be adversarial, an assumption that does not necessarily hold in reality. Finally, the SQ-based lower bound holds for every initialization (in particular, for the initialization we use here), so it does not capture the efficient training process Theorem 1 describes. Theorem 1 shows, however, that with symmetry-based initialization, parity can be efficiently learned. So, in a nutshell, parity can not be learned as part of P, but it can be learned as part of S. One could wonder why the hardness proof for P cannot be applied for S as both classes consist of many input sensitive functions. The answer lies in the fact that P has a far bigger statistical dimension than S (all functions in P are orthogonal to each other, unlike S). The proof of the theorem utilizes the different behavior of the two layers in the network. SGD is performed using a step size h that is polynomially small in n. The analysis shows that in a polynomial number of steps that is independent of the choice of h the following two properties hold: (i) the output neuron reaches a "good" state and (ii) the hidden layer does not change in a "meaningful" way. These two properties hold when h is small enough. In Section 5.2, we experiment with large values of h. 
We see that, although the training error is zero, the true error becomes large. Here is a high level description of the proof. The neurons in the hidden layer define an "embedding" of the inputs space X = {0, 1} n into R (a.k.a. the feature map). This embedding changes in time according to the training examples and process. The proof shows that if at any point in time this embedding has good enough margin, then training with standard SGD quickly converges. This is explained in more detail in Section 3. It remains an interesting open problem to understand this phenomenon in greater generality, using a cleaner and more abstract language. To better understand the context of our research, we survey previous related works. The expressiveness and limitations of neural networks were studied in several works such as;; and. Constructions of small ReLU networks for the parity function appeared in several previous works, such as , , and. Constant depth circuits for the parity function were also studied in the context of computational complexity theory, see for example , and Håstad. The training phase of neural networks was also studied in many works. Here we list several works that seem most related to ours. analyzed SGD for general neural network architecture and showed that the training error can be nullified, e.g., for the class of bounded degree polynomials (see also). studied neural tangent kernels (NTK), an infinite width analogue of neural networks. showed that randomly initialized shallow ReLU networks nullify the training error, as long as the number of samples is smaller than the number of neurons in the hidden layer. Their analysis only deals with optimization over the first layer (so that the weights of the output neuron are fixed). provided another analysis of the latter two works. Allen-Zhu et al. (2018b) showed that over-parametrized neural networks can achieve zero training error, as as long as the data points are not too close to one another and the weights of the output neuron are fixed. provided guarantees for zero training error, assuming the two classes are separated by a positive margin. Convergence and generalization guarantees for neural networks were studied in the following works. gave data-dependent generalization bounds for GD. All these works optimized only over the hidden layer (the output layer is fixed after initialization). Margins play an important role in learning, and we also use it in our proof. , , and gave generalization bounds for neural networks that are based on their margin when the training ends. From a practical perspective, , and suggested different training algorithms that optimize the margin. As discussed above, it seems difficult for neural networks to learn parities. and demonstrated this using the language statistical queries (SQ). This is a valuable language, but it misses some central aspects of training neural networks. SQ seems to be closely related to GD, but does not seem to capture SGD. SQ also shows that many of the parities functions ⊗ i∈S x i are difficult to learn, but it does not imply that the parity function ⊗ i∈[n] x i is difficult to learn. demonstrated a similar phenomenon in a setting that is closer to the "real life" mechanics of neural networks. We suggest that taking the symmetries of the learning problem into account can make the difference between failure and success. Several works suggested different neural architectures that take symmetries into account; see , , and. 
Here we describe efficient representations for symmetric functions by networks with one hidden layer. These representations are also useful later on, when we study the training process. We study two different activation functions, sigmoid and ReLU (similar statement can be proved for other activations, like arctan). Each activation function requires its own representation, as in the two lemmas below. We start with the activation σ (ξ) =, since it helps to understand the construction for the ReLU activation. The building blocks of the symmetric functions are indicators of |x| = i for i ∈ {0, 1, . . ., n}. An indicator function is essentially the difference between two sigmoid functions: where A network with one hidden layer of n + 2 neurons with sigmoid activations and one bias neuron is sufficient to represent any function in S. The coefficients of the sigmoid gates are 0, ±1 in this representation. The proofs of this lemma and the subsequent lemmas appear in the appendix. A sigmoid function can be represented using ReLU(ξ) = max{0, ξ} as the difference between two ReLUs Hence, an indicator function can be represented using sign(1 |x|=i − 0.5) = sign(Γ i − 0.5) where The lemma shows that a network with one hidden layer of n + 3 ReLU neurons and one bias neuron is sufficient to represent any function in S. The coefficients of the ReLU gates are 0, ±1, ±2 in this representation. The goal of this section is to describe a small network with one hidden layer that (when initialized properly) efficiently learns symmetric functions using a small number of examples (the training is done via SGD). Here we specify the architecture, initialization and loss function that is implicit in our main (Theorem 1). To guarantee convergence of SGD, we need to start with "good" initial conditions. The initialization we pick depends on the activation function it uses, and is chosen with resemblance to Lemma 2 for ReLU. On a high level, this indicates that understanding the class of functions we wish to study in term of "representation" can be helpful when choosing the architecture of a neural network in a learning context. The network we consider has one hidden layer. We denote by w i j the weight between coordinate j of the input and neuron i in the hidden layer. We denote W this matrix of weights. We denote by b i the bias of neuron i of the hidden layer. We denote B this vector of weights. We denote by m i is the weight from neuron i in the hidden layer to the output neuron. We denote M this vector of weights. We denote by b the bias of the output neuron. Initialize the network as follows: The dimensions of W are (n + 3) × n. For all 1 ≤ i ≤ (n + 3) and 1 ≤ j ≤ n, we set w i j = 1 and b i = −i + 2. We set M = 0 and b = 0. To run SGD, we need to choose a loss function. We use the hinge loss, where v x = ReLU(W x + B) is the output of the hidden layer on input x and β > 0 is a parameter of confidence. A key property in the analysis is the'margin' of the hidden layer with respect to the function being learned. We are interested in the following set V in R d. Recall that W is the weight matrix between the input layer and the hidden layer, and that B is the relevant bias vector. Given W, B, we are interested in the set V = {v x : x ∈ X}, where v x = ReLU(W x + B). In words, we think of the neurons in the hidden layer as defining an "embedding" of X in Euclidean space. A similar construction works for other activation functions. 
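The ReLU construction and initialization above can be checked numerically: with all-ones input weights and biases b_i = -i + 2, hidden unit i computes ReLU(|x| - i + 2), and the indicator 1_{|x|=k} is the "hat" combination ReLU(|x|-k+1) - 2 ReLU(|x|-k) + ReLU(|x|-k-1), which only needs output coefficients in {0, +-1, +-2}. The exact constants of Lemma 2 are not fully legible in this copy, so the sketch below is one construction consistent with the stated facts rather than a verbatim reproduction.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

n = 8
W = np.ones((n + 3, n))                                   # all-ones input weights
B = np.array([-i + 2 for i in range(1, n + 4)], float)    # b_i = -i + 2

def hidden(x):                                            # unit i outputs ReLU(|x| - i + 2)
    return relu(W @ x + B)

rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=n + 1)                   # a random f in S: f(x) = a_{|x|}

# hat-function output weights: 1_{|x|=k} = ReLU(|x|-k+1) - 2 ReLU(|x|-k) + ReLU(|x|-k-1)
M = np.zeros(n + 3)
for k in range(n + 1):
    M[k] += a[k]
    M[k + 1] -= 2.0 * a[k]
    M[k + 2] += a[k]

for _ in range(1000):                                     # spot-check on boolean inputs
    x = rng.integers(0, 2, size=n).astype(float)
    assert np.isclose(M @ hidden(x), a[int(x.sum())])
print("output weights in {0, +-1, +-2} reproduce f(x) = a_{|x|}")
```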
We say that Y: V → {±1} agrees with f ∈ S if for all x ∈ X it holds that The following lemma bounds from below the margin of the initial V. Lemma 3. If Y is a partition that agrees with some function in S for the initialization described above then marg(Y) ≥ Ω(1/ √ n). Proof. By Lemmas 1 and 2, we see that any function in S can be represented with a vector of weights M, b ∈ [−1, 1] Θ(n) of the output neuron together with a bias. These M, b induce a partition we have our desired . Before analyzing the full behavior of SGD, we make an observation: if the weights of the hidden layer are fixed with the initialization described above, then Theorem 1 holds for SGD with batch size 1. This observation, unfortunately, does not suffice to prove Theorem 1. In the setting we consider, the training of the neural network uses SGD without fixing any weights. This more general case is handled in the next section. The rest of this subsection is devoted for explaining this observation. showed that that the perceptron algorithm makes a small number of mistakes for linearly separable data with large margin. For a comprehensive survey of the perceptron algorithm and its variants, see. Running SGD with the hinge loss induces the same update rule as in a modified perceptron algorithm, Algorithm 1. Initialize: Novikoff's proof can be generalized to any β > 0 and batches of any size to yield the following theorem; see; and appendix A. Theorem 2. For Y: V → {±1} with margin γ > 0 and step size h > 0, the modified perceptron algorithm performs at most updates and achieves a margin of at least γβ h 2β h+(Rh) 2, where R = max v∈V v. So, when the weights of the hidden layer are fixed, Lemma 3 implies that the number of SGD steps is at most polynomial in n. When we run SGD on the entire network, the layers interact. For a ReLU network at time t, the update rule for W is as follows. If the network classifies the input x correctly with confidence more than β, no change is made. Otherwise, we change the weights in M by ∆M = yv x h, where y is the true label and h is the step size. If also neuron i of the hidden fired on x, we update its incoming weights by ∆W i,: = ym i xh. These update rules define the following dynamical system: (a) where H is the Heaviside step function and • is the Hadamard pointwise product. A key observation in the proof is that the weights of the last layer ( and) are updated exactly as the modified perceptron algorithm. Another key statement in the proof is that if the network has reached a good representation of the input (i.e., the hidden layer has a large margin), then the interaction between the layers during the continued training does not impair this representation. This is summarized in the following lemma (we are not aware of a similar statement in the literature). Lemma 4. Let M = 0, b = 0, and V = {ReLU(W x + B): x ∈ X} be a linearly separable embedding of X and with margin γ > 0 by the hidden layer of a neural network of depth two with ReLU activation and weights given by W, B. Let R X = max x∈X x, let R = max v∈V v, and 0 < h ≤ γ 5/2 100R 2 R X be the integration step. Assuming R X > 1 and γ ≤ 1, and using β = R 2 h in the loss function, after t SGD iterations the following hold: -Each v ∈ V moves a distance of at most O(R 2 X h 2 Rt 3/2). -The norm M (t) is at most O(Rh √ t). -The training ends in at most O(R 2 /γ 2) SGD updates. Intuitively, this type of lemma can be useful in many other contexts. 
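The output-layer update described above coincides with a margin ("modified") perceptron on the embedded points v_x: update only when the functional margin is at most beta, and move by h * y * v_x. Because the listing of Algorithm 1 is partly garbled in this copy, the sketch below follows the standard margin-perceptron form that Theorem 2 refers to; updating the bias in the same way is an assumption.

```python
import numpy as np

def margin_perceptron(V, Y, beta, h, max_epochs=1000):
    """V: rows are embedded points v_x = ReLU(Wx + B); Y: labels in {-1, +1}.
    Update M <- M + h*y*v (and b likewise) whenever y*(M @ v + b) <= beta."""
    M, b, updates = np.zeros(V.shape[1]), 0.0, 0
    for _ in range(max_epochs):
        changed = False
        for v, y in zip(V, Y):
            if y * (M @ v + b) <= beta:      # wrong, or correct but not confident enough
                M += h * y * v
                b += h * y
                updates += 1
                changed = True
        if not changed:                       # every point cleared the margin beta
            break
    return M, b, updates
```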
The high level idea is to identify a "good geometric structure" that the network reaches and enables efficient learning. Proof of Theorem 1. There is an unknown distribution D over the space X. We pick i.i.d. examples S = ((x 1, y 1),..., (x m, y m)) where m ≥ c n+log(1/δ) ε according to D, where y i = f (x i) for some f ∈ S. Run SGD for O(n 4) steps, where the step size is h = O(1/n 6) and the parameter of the loss function is β = R 2 h with R = n 3/2. We claim that it suffices to show that at the end of the training (i) the network correctly classifies all the sample points x 1,..., x m, and (ii) for every x ∈ X such that there exists 1 ≤ i ≤ m with |x| = |x i |, the network outputs y i on x as well. Here is why. The initialization of the network embeds the space X into n + 4 dimensional space (including the bias neuron of the hidden layer). Let V be the initial embedding V = {ReLU(W x + B ): x ∈ X}. Although |X| = 2 n, the size of V is n + 1. The VC dimension of all the boolean functions over V is n + 1. Now, m samples suffice to yield ε true error for an ERM when the VC dimension is n + 1; see e.g. Theorem 6.7 in. It remains to prove (i) and (ii) above. By Lemma 3, at the beginning of the training, the partition of V defined by the target f ∈ S has a margin of γ = Ω(1/ √ n). We are interested in the eventual V * = {ReLU(W * x + B *): x ∈ X} embedding of X as well. The modified perceptron algorithm together with Lemma 4 guarantees that after K ≤ 20R 2 /γ 2 = O(n 4) updates, (M *, b *) separates the embedded sample V * S = {ReLU(W * x i + B *): 1 ≤ i ≤ m} with a margin of at least 0.9γ/3. It remains to prove (ii). Lemma 4 states that as long as less than K = O(n 4) updates were made, the elements in V moved at most O(1/n 2). At the end of the training, the embedded sample V S is separated with a margin of at least 0.9γ/3 with respect to the hyperplane defined by M * and B *. Each v * x for x ∈ X moved at most O(1/n 2) < γ/4. This means that if |x| = |x i | then the network has the same output on x and x i. Since the network has zero empirical error, the output on this x is y i as well. A similar proof is available with sigmoid activation (with better convergence rate and larger allowed step size). Remark. The generalization part of the above proof can be viewed as a consequence of sample compression . Although the eventual network depends on all examples, the proof shows that its functionality depends on at most n + 1 examples. Indeed, after the training, all examples with equal hamming weight have the same label. Remark. The parameter β = R 2 h we chose in the proof may seem odd and negligible. It is a construct in the proof that allows us to bound efficiently the distance that the elements in V have moved during the training. For all practical purposes β = 0 works as well (see Figure 4). We accompany the theoretical with some experiments. We used a network with one hidden layer of 4n + 3 neurons, ReLU activation, and the hinge loss with β = n 3 h. In all the experiments, we used SGD with mini-batch of size one and before each epoch we randomized the sample. We observed similar behavior for larger mini-batches, other activation functions, and other loss functions. The graphs that appear in the appendix A present the training error and the true error 2 versus the epoch of the training process. In all the comparisons below, we chose a random symmetric function and a random sample from X. Figure 2 demonstrates our theoretical and also validates the performance of our initialization. 
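Below is a sketch of the experimental setup just described: one hidden layer of 4n+3 ReLU units, symmetry-based initialization, the hinge loss with beta = n^3 * h, and SGD with mini-batches of size one. How the hidden units beyond the first n+3 are initialized and how the output layer starts are not spelled out in the text, so those details here are assumptions.

```python
import torch
import torch.nn as nn

n, h = 20, 1e-3
net = nn.Sequential(nn.Linear(n, 4 * n + 3), nn.ReLU(), nn.Linear(4 * n + 3, 1))
with torch.no_grad():                          # symmetry-based initialization, assumed
    net[0].weight.fill_(1.0)                   # to extend b_i = -i + 2 to all 4n+3 units
    net[0].bias.copy_(torch.tensor([-i + 2.0 for i in range(1, 4 * n + 4)]))
    net[2].weight.zero_()
    net[2].bias.zero_()

beta = n ** 3 * h
opt = torch.optim.SGD(net.parameters(), lr=h)

def hinge(pred, y):                            # hinge loss with confidence parameter beta
    return torch.clamp(beta - y * pred, min=0.0).mean()

a = torch.randint(0, 2, (n + 1,)).float() * 2 - 1          # random symmetric target
X = (torch.rand(10 * n, n) < 0.5).float()                   # random sample from {0,1}^n
Y = a[X.sum(dim=1).long()]

for epoch in range(50):
    for i in torch.randperm(len(X)).tolist():               # batch size 1, reshuffled
        out = net(X[i:i + 1]).squeeze(1)
        loss = hinge(out, Y[i:i + 1])
        opt.zero_grad(); loss.backward(); opt.step()
```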
In one setting, we trained only the second layer (freezed the weights of the hidden layer) which essentially corresponds to the perceptron algorithm. In the second setting, we trained both layers with a step size h = n −6 (as the theory suggests). As expected, performance in both cases is similar. We remark that SGD continues to run even after minimizing the empirical error. This happens because of the parameter β > 0. Here we experiment with two parameters in the proof, the step size h and the confidence parameter β. In Figure 3, we used three different step sizes, two of which much larger than the theory suggests. We see that the training error converges much faster to zero, when the step size is larger. This fast convergence comes at the expense of the true error. For a large step size, generalization cease to hold. Setting β = n 3 h is a construct in the proof. Figure 4 shows that setting β = 0 does not impair the performance. The difference between theory (requires β > 0) and practice (allows β = 0) can be explained as follows. The proof bounds the worst-case movement of the hidden layer, whereas in practice an average-case argument suffices. Figure 5 shows that even for n = 20, learning parity is hard from a random initialization. When the sample size is small the training error can be nullified but the true error is large. As the sample grows, it becomes much harder for the network to nullify even the training error. With our initialization, both the training error and true error are minimized quickly. Figure 6 demonstrates the same phenomenon for a random symmetric function. Our initialization also delivers satisfying when the input data it corrupted. In figure 7, we randomly perturb (with probability p = 1 10) the labels and use the same SGD to train the model. In figure 8, we randomly shift every entry of the vectors in the space X by ε that is uniformly distributed in [−0.1, 0.1] n. This work demonstrates that symmetries can play a critical role when designing a neural network. We proved that any symmetric function can be learned by a shallow neural network, with proper initialization. We demonstrated by simulations that this neural network is stable under corruption of data, and that the small step size is the proof is necessary. We also demonstrated that the parity function or a random symmetric function cannot be learned with random initialization. How to explain this empirical phenomenon is still an open question. The works and treated parities using the language of SQ. This language obscures the inner mechanism of the network training, so a more concrete explanation is currently missing. We proved in a special case that the standard SGD training of a network efficiently produces low true error. The general problem that remains is proving similar for general neural networks. A suggestion for future works is to try to identify favorable geometric states of the network that guarantee fast convergence and generalization. Proof. For all k ∈ A and x ∈ X of weight k, the first inequality holds since ∆ i (x) ≥ 0 for all i and x. For all k ∈ A and x ∈ X of weight k, = 2 exp(−2.5)/(1 − exp(−5)) < 0.17; the first equality follows from the definition, the second equality follows from σ (5(x + 0.5)) − σ (5(x − 0.5)) = σ (5(x + 0.5)) + σ (5(−x + 0.5)) − 1 for all x, the first inequality neglects the negative sums, and the second inequality follows because exp(ξ) > σ (ξ) for all ξ. Proof. The proof follows from two observations: For all i ∈ A and x of weight i it holds Γ i (x) = 1. Proof. 
We are interested in the maximal distance the embedding of an element x ∈ X has moved from its initial embedding: To simplify equations- discussed above, we assume that during the optimization process the norm of the weights W and B grow at a maximal rate: here the norm of a matrix is the 2 -norm. To bound these quantities, we follow the modified perceptron proof and add another quantity to bound. That is, the maximal norm R (t) of the embedded space X at time t satisfies (by assumption R X > 1) we used that the spectral norm of a matrix is at most its 2 -norm. We assume a worst-case where R (t) grows monotonically at a maximal rate. By the modified perceptron algorithm and choice β = R 2 h, Sample of size 10n whose input dimension is n = 30. | When initialized properly, neural networks can learn the simple class of symmetric functions; when initialized randomly, they fail. | 1,262 | scitldr |
Architecture search aims at automatically finding neural architectures that are competitive with architectures designed by human experts. While recent approaches have achieved state-of-the-art predictive performance for image recognition, they are problematic under resource constraints for two reasons: the neural architectures found are solely optimized for high predictive performance, without penalizing excessive resource consumption;most architecture search methods require vast computational resources. We address the first shortcoming by proposing LEMONADE, an evolutionary algorithm for multi-objective architecture search that allows approximating the Pareto-front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run of the method. We address the second shortcoming by proposing a Lamarckian inheritance mechanism for LEMONADE which generates children networks that are warmstarted with the predictive performance of their trained parents. This is accomplished by using (approximate) network morphism operators for generating children. The combination of these two contributions allows finding models that are on par or even outperform different-sized NASNets, MobileNets, MobileNets V2 and Wide Residual Networks on CIFAR-10 and ImageNet64x64 within only one week on eight GPUs, which is about 20-40x less compute power than previous architecture search methods that yield state-of-the-art performance. Deep learning has enabled remarkable progress on a variety of perceptual tasks, such as image recognition BID12, speech recognition, and machine translation BID0. One crucial aspect for this progress are novel neural architectures BID25; BID7. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automatic architecture search methods . Some of the architectures found in an automated way have already outperformed the best manually-designed ones; however, algorithms such as by BID32;; BID20 BID36 for finding these architectures require enormous computational resources often in the range of thousands of GPU days. Prior work on architecture search has typically framed the problem as a single-objective optimization problem. However, most applications of deep learning do not only require high predictive performance on unseen data but also low resource-consumption in terms of, e.g., inference time, model size or energy consumption. Moreover, there is typically an implicit trade-off between predictive performance and consumption of resources. Recently, several architectures have been manually designed that aim at reducing resource-consumption while retaining high predictive performance BID8 BID22. Automatically found neural architectures have also been down-scaled to reduce resource consumption. However, very little previous work has taken the trade-off between resource-consumption and predictive performance into account during automatic architecture search. In this work, we make the following two main contributions:1. To overcome the need for thousands of GPU days BID32 BID21, we make use of operators acting on the space of neural network architectures that preserve the function a network represents, dubbed network morphisms (; BID27, obviating training from scratch and thereby substantially reducing the required training time per network. 
This mechanism can be interpreted as Lamarckian inheritance in the context of evolutionary algorithms, where Lamarckism refers to a mechanism which allows passing skills acquired during an individual's lifetime (e.g., by means of learning), on to children by means of inheritance. Since network morphisms are limited to solely increasing a network's size (and therefore likely also resource consumption), we introduce approximate network morphisms (Section 3.2) to also allow shrinking networks, which is essential in the context of multi-objective search. The proposed Lamarckian inheritance mechanism could in principle be combined with any evolutionary algorithm for architecture search, or any other method using (a combination of) localized changes in architecture space.2. We propose a Lamarckian Evolutionary algorithm for Multi-Objective Neural Architecture DEsign, dubbed LEMONADE, Section 4, which is suited for the joint optimization of several objectives, such as predictive performance, inference time, or number of parameters. LEMONADE maintains a population of networks on an approximation of the Pareto front of the multiple objectives. In contrast to generic multi-objective algorithms, LEMONADE exploits that evaluating certain objectives (such as an architecture's number of parameters) is cheap while evaluating the predictive performance on validation data is expensive (since it requires training the model first). Thus, LEMONADE handles its various objectives differently: it first selects a subset of architectures, assigning higher probability to architectures that would fill gaps on the Pareto front for the "cheap" objectives; then, it trains and evaluates only this subset, further reducing the computational resource requirements during architecture search. In contrast to other multi-objective architecture search methods, LEMONADE (i) does not require to define a trade-off between performance and other objectives a-priori (e.g., by weighting objectives when using scalarization methods) but rather returns a set of architectures, which allows the user to select a suitable model a-posteriori; (ii) LEMONADE does not require to be initialized with well performing architectures; it can be initialized with trivial architectures and hence requires less prior knowledge. Also, LEMONADE can handle various search spaces, including complex topologies with multiple branches and skip connections. We evaluate LEMONADE for up to five objectives on two different search spaces for image classification: (i) non-modularized architectures and (ii) cells that are used as repeatable building blocks within an architecture BID31 and also allow transfer to other data sets. LEMONADE returns a population of CNNs covering architectures with 10 000 to 10 000 000 parameters. Within only 5 days on 16 GPUs, LEMONADE discovers architectures that are competitive in terms of predictive performance and resource consumption with hand-designed networks, such as MobileNet V2 BID22, as well as architectures that were automatically designed using 40x greater resources and other multi-objective methods . Multi-objective Optimization Multi-objective optimization BID17 ) deals with problems that have multiple, complementary objective functions f 1,..., f n. Let N be the space of feasible solutions N (in our case the space of feasible neural architectures). In general, multi-objective optimization deals with finding N * ∈ N that minimizes the objectives f 1,..., f n. 
However, typically there is no single N * that minimizes all objectives at the same time. In contrast, there are multiple Pareto-optimal solutions that are optimal in the sense that one cannot reduce any f i without increasing at least one f j. More formally, a solution N Pareto-dominates another solution DISPLAYFORM0 ). The Pareto-optimal solutions N was recently proposed to frame NAS as a reinforcement learning (RL) problem, where the reward of the RL agent is based on the validation performance of the trained architecture BID1 BID32 BID31 BID19. BID32 use a recurrent neural network to generate a string representing the neural architecture. In a follow-up work, search for cells, which are repeated according to a fixed macro architecture to generate the eventual architecture. Defining the architecture based on a cell simplifies the search space. An alternative to using RL are neuro-evolutionary approaches that use genetic algorithms for optimizing the neural architecture BID24 BID14 BID21 BID18 BID28. In contrast to these works, our proposed method is applicable for multi-objective optimization and employs Lamarckian inheritance, i.e, learned parameters are passed on to a network's offspring. A related approach to our Lamarckian evolution is population-based training BID9, which, however, focuses on hyperparameter optimization and not on the specific properties of the optimization of neural architectures. We note that it would be possible to also include the evolution of hyperparameters in our work. Unfortunately, most of the aforementioned approaches require vast computational resources since they need to train and validate thousands of neural architectures; e.g., BID32 trained over 10.000 neural architectures, requiring thousands of GPU days. One way of speeding up evaluation is to predict performance of a (partially) trained model (; b; BID11 BID13 . Works on performance prediction are complementary to our work and could be incorporated in the future. One-Shot Architecture Search is another promising approach for speeding up performance estimation, which treats all architectures as different subgraphs of a supergraph (the one-shot model) and shares weights between architectures BID23; BID19 BID15 ). Only the weights of a single one-shot model need to be trained, and architectures (which are just subgraphs of the one-shot model) can then be evaluated without any separate training. However, a general limitation of one-shot NAS is that the supergraph defined a-priori restricts the search space to its subgraphs. Moreover, approaches which require that the entire supergraph resides in GPU memory during architecture search will be restricted to relatively small supergraphs. It is also not obvious how one-shot models could be employed for multi-objective optimization as all subgraphs of the one-shot models are of roughly the same size and it is not clear if weight sharing would work for very different-sized architectures. LEMONADE does not suffer from any of these disadvantages; it can handle arbitrary large, unconstrained search spaces while still being efficient.; Cai et al. (2018a) proposed to employ the concept of network morphisms (see Section 3.1). The basic idea is to initialize weights of newly generated neural architectures based on weights of similar, already trained architectures so that they have the same accuracy. This pretrained initialization allows reducing the large cost of training all architectures from scratch. 
Our work extends this approach by introducing approximate network morphisms, making the use of such operators suitable for multi-objective optimization. Multi-objective Neural Architecture Search Very recently, there has also been some work on multi-objective neural architecture search BID10; BID26 with the goal of not solely optimizing the accuracy on a given task but also considering resource consumption. BID10 parameterize an architecture by a fixed-length vector description, which limits the architecture search space drastically. In parallel, independent work to ours, extend PNAS BID13 by considering multiple objective during the model selection step. However, they employ CondenseNet BID6 as a base network and solely optimize building blocks within the network which makes the search less interesting as (i) the base network is by default already well performing and (ii) the search space is again limited. BID26 use a weighted product method to obtain a scalarized objective. However, this scalarization comes with the drawback of weighting the objectives a-priori, which might not be suitable for certain applications. In contrast to all mentioned work, LEMONADE (i) does not require a complex macro architecture but rather can start from trivial initial networks, (ii) can handle arbitrary search spaces, (iii) does not require to define hard constraints or weights on objectives a-priori. Let N (X) denote a space of neural networks, where each element N ∈ N (X) is a mapping from X ⊂ R n to some other space, e.g., mapping images to labels. A network operator T: DISPLAYFORM0 We now discuss two specific classes of network operators, namely network morphisms and approximate network morphisms. Operators from these two classes will later on serve as mutations in our evolutionary algorithm. Chen et al. FORMULA6 introduced two function-preserving operators for deepening and widening a neural network. BID27 built upon this work, dubbing function-preserving operators on neural networks network morphisms. Formally, a network morphism is a network operator satisfying N w (x) = (T N)w(x) for every x ∈ X, i.e., N w and (T N)w represent the same function. This can be achieved by properly initializingw. We now describe the operators used in LEMONADE and how they can be formulated as a network morphism. We refer to Appendix A.1.1 for details.1. Inserting a Conv-BatchNorm-ReLU block. We initialize the convolution to be an identity mapping, as done by Chen et al. FORMULA6 ("Net2DeeperNet"). Offset and scale of BatchNormalization are initialized to be the (moving) batch mean and (moving) batch variance, hence initially again an identity mapping. Since the ReLU activation is idempotent, i.e., ReLU (ReLU (x)) = ReLU (x), we can add it on top of the previous two operations without any further changes, assuming that the block will be added on top of a ReLU layer.2. Increase the number of filters of a convolution. This operator requires the layer to be changed to have a subsequent convolutional layer, whose parameters are padded with 0's. Alternatively, one could use the "Net2WiderNet" operator by.3. Add a skip connection. We allow skip connection either by concatenation BID7 or by addition . In the former case, we again use zero-padding in sub-sequential convolutional layers. In the latter case, we do not simply add two outputs x and y but rather use a convex combination (1 − λ)x + λy, with a learnable parameter λ initialized as 0 (assuming x is the original output and y the output of an earlier layer). 
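Operator 1 above (insert an identity-initialized Conv-BatchNorm-ReLU block) is straightforward to realize in code. The sketch below is a generic Net2DeeperNet-style insertion; matching the BatchNorm offset and scale to the running batch statistics, as the paper does, is omitted and only noted in a comment.

```python
import torch
import torch.nn as nn

def identity_conv_block(channels, kernel_size=3):
    """Conv-BN-ReLU block initialized as (approximately) the identity mapping,
    so it can be inserted into a trained network without changing its function."""
    conv = nn.Conv2d(channels, channels, kernel_size,
                     padding=kernel_size // 2, bias=False)
    nn.init.dirac_(conv.weight)        # the convolution starts as the identity
    bn = nn.BatchNorm2d(channels)      # scale 1 / shift 0 here; the paper instead sets
                                       # these to the (moving) batch statistics
    return nn.Sequential(conv, bn, nn.ReLU())

# sanity check: on a non-negative input (i.e. after a preceding ReLU) the fresh
# block is the identity up to BatchNorm's epsilon
block = identity_conv_block(8).eval()
x = torch.relu(torch.randn(2, 8, 16, 16))
print(torch.allclose(block(x), x, atol=1e-4))
```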
One common property of all network morphisms is that they can only increase the capacity of a network 1. This may be a reasonable property if one solely aims at finding a neural architectures with maximal accuracy, but not if one also aims at neural architectures with low resource requirements. Also, decisions once made can not be reverted. Operators like removing a layer could considerably decrease the resources required by the model while (potentially) preserving its performance. Hence, we now generalize the concept of network morphisms to also cover operators that reduce the capacity of a neural architecture. We say an operator T is an approximate network morphism (ANM) with respect to a neural network N w with parameters w if N w (x) ≈ (T N)w(x) for every x ∈ X. We refer to Appendix A.1.2 for a formal definition. In practice we simply determinew so thatÑ approximates N by using knowledge distillation BID3.In our experiments, we employ the following ANM's: (i) remove a randomly chosen layer or a skip connection, (ii) prune a randomly chosen convolutional layer (i.e., remove 1/2 or 1/4 of its filters), and (iii) substitute a randomly chosen convolution by a depthwise separable convolution. Note that these operators could easily be extended by sophisticated methods for compressing neural networks BID8 BID31. LEMONADE maintains a population of trained networks that constitute a Pareto front in the multi-objective space. Parents are selected from the population inversely proportional to their density. Children are generated by mutation operators with Lamarckian inheritance that are realized by network morphisms and approximate network morphisms. NM operators generate children with the same initial error as their parent. In contrast, children generated with ANM operators may incur a (small) increase in error compared to their parent. However, their initial error is typically still very small. (Right) Only a subset of the generated children is accepted for training. After training, the performance of the children is evaluated and the population is updated to be the Pareto front. Algorithm 1 LEMONADE 1: input: P 0, f, n gen, n pc, n ac 2: P ← P 0 3: for i ← 1,..., n gen do 4: DISPLAYFORM0 Compute parent distribution p P (Eq. 1) BID38: DISPLAYFORM1 Compute children distribution p child (Eq. 2) 8: DISPLAYFORM2 Evaluate f exp for N c ∈ N c ac 10:P ← P aretoF ront(P ∪ N c ac, f) 11: end for 12: return P In this section, we propose a Lamarckian Evolutionary algorithm for MultiObjective Neural Architecture DEsign, dubbed LEMONADE. We refer to FIG1 for an illustration as well as Algorithm 1 for pseudo code. LEMONADE aims at minimizing multiple objectives DISPLAYFORM3 denote expensive-to-evaluate objectives (such as the validation error or some measure only be obtainable by expensive simulation) and its other components f cheap (N) ∈ R n denote cheap-to-evaluate objectives (such as model size) that one also tries to minimize. LEMONADE maintains a population P of parent networks, which we choose to comprise all non-dominated networks with respect to f, i.e., the current approximation of the Pareto front 2. In every iteration of LEMONADE, we first sample parent networks with respect to some probability distribution based on the cheap objectives and generate child networks by applying network operators (described in Section 3). In a second sampling stage, we sample a subset of children, again based on cheap objectives, and solely this subset is evaluated on the expensive objectives. 
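The following minimal Python sketch mirrors Algorithm 1: the population is the current Pareto front, parents and accepted children are sampled inversely proportional to a kernel density estimate over the cheap objectives (anticipating Eq. 1 and 2 below), and only the accepted children are trained and evaluated on the expensive objectives. The KDE bandwidth, population sizes and helper structure are illustrative assumptions, not the reference implementation.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (all objectives minimized)."""
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def pareto_front(candidates):
    """Keep only the non-dominated (network, objective_vector) pairs."""
    return [(n, f) for i, (n, f) in enumerate(candidates)
            if not any(dominates(g, f)
                       for j, (_, g) in enumerate(candidates) if j != i)]

def lemonade(seed_networks, f_cheap, f_exp, operators,
             n_gen=100, n_pc=50, n_ac=10, seed=0):
    """Sketch of Algorithm 1. f_cheap / f_exp map a network to a numpy vector of
    cheap / expensive objective values; `operators` is a list of callables
    implementing (approximate) network morphisms."""
    rng = np.random.default_rng(seed)

    def full_f(n):
        return np.concatenate([np.atleast_1d(f_exp(n)), np.atleast_1d(f_cheap(n))])

    population = pareto_front([(n, full_f(n)) for n in seed_networks])
    for _ in range(n_gen):
        cheap = np.array([np.atleast_1d(f_cheap(n)) for n, _ in population])
        kde = KernelDensity(bandwidth=0.2).fit(cheap)       # p_KDE over f_cheap only

        # Stage 1: sample parents inversely proportional to their density (cf. Eq. 1).
        inv = 1.0 / np.exp(kde.score_samples(cheap))
        parent_idx = rng.choice(len(population), size=n_pc, p=inv / inv.sum())
        children = [operators[rng.integers(len(operators))](population[i][0])
                    for i in parent_idx]

        # Stage 2: accept the children most likely to fill sparse regions (cf. Eq. 2).
        child_cheap = np.array([np.atleast_1d(f_cheap(c)) for c in children])
        inv_c = 1.0 / np.exp(kde.score_samples(child_cheap))
        accept_idx = rng.choice(len(children), size=n_ac, replace=False,
                                p=inv_c / inv_c.sum())

        # Only accepted children are trained and scored on the expensive objectives.
        evaluated = [(children[i], full_f(children[i])) for i in accept_idx]
        population = pareto_front(population + evaluated)
    return population
```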
Hence, we exploit that f cheap is cheap to evaluate in order to bias both sampling processes towards areas of f cheap that are sparsely populated. We thereby evaluate f cheap many times in order to end up with a diverse set of children in sparsely populated regions of the objective space, but evaluate f exp only a few times. More specifically, LEMONADE first computes a density estimator p KDE (e.g., in our case, a kernel density estimator) on the cheap objective values of the current population, {f cheap (N)|N ∈ P}. Note that we explicitly only compute the KDE with respect to f cheap rather than f as this allows to evaluate p KDE (f cheap (N)) very quickly. Then, larger number n pc of proposed children N c pc = {N c 1, . . ., N c npc} is generated by applying network operators, where the parent N for each child is sampled according to a distribution inversely proportional to p KDE, DISPLAYFORM4 with a normalization constant c = N ∈P 1/p KDE (f cheap (N)) −1. Since children have similar objective values as their parents (network morphisms do not change architectures drastically), this sampling distribution of the parents is more likely to also generate children in less dense regions of f cheap. Afterwards, we again employ p KDE to sample a subset of n ac accepted children N c ac ⊂ N c pc. The probability of a child being accepted is DISPLAYFORM5 withĉ being another normalization constant. Only these accepted children are evaluated according to f exp. By this two-staged sampling strategy we generate and evaluate more children that have the potential to fill gaps in f. We refer to the ablation study in Appendix A.2.2 for an empirical comparison of this sampling strategy to uniform sampling. Finally, LEMONADE computes the Pareto front from the current generation and the generated children, yielding the next generation. The described procedure is repeated for a prespecified number of generations (100 in our experiments). We present for LEMONADE on searching neural architectures for CIFAR-10. We ran LEMONADE with three different settings: (i) we optimize 5 objectives and search for entire architectures (Section 5.1), (ii) we optimize 2 objectives and search for entire architectures (Appendix A.2), and (iii) we optimize 2 objectives and search for cells (Section 5.2, Appendix A.2). We also transfer the discovered cells from the last setting to ImageNet (Section 5.4) and its down-scaled version ImageNet64x64 (Section 5.3). All experimental details, such as a description of the search spaces and hyperparameters can be found in Appendix A.3.The progress of LEMONADE for setting (ii) is visualized in FIG2. The Pareto front improves over time, reducing the validation error while covering a wide regime of, e.g., model parameters, ranging from 10 000 to 10 000 000. We aim at solving the following multi-objective problem: minimize the five objectives (i) performance on CIFAR-10 (expensive objective), (ii) performance on CIFAR-100 (expensive), (iii) number of parameters (cheap), (iv) number of multiply-add operations (cheap) and (v) inference time 3 (cheap). We think having five objectives is a realistic scenario for most NAS applications. Note that one could easily use other, more sophisticated measures for resource efficiency. In this experiment, we search for entire neural network architectures (denoted as Search Space I, see Appendix A.3.2 for details) instead of convolutional cells (which we will do in a later experiment). 
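As an example of the cheap objectives listed above, the sketch below computes the parameter count and a crude inference-time measurement for a candidate PyTorch model. Multiply-add counting, the remaining cheap objective, would require walking the network layer by layer and is omitted; the input shape and number of timing runs are illustrative.

```python
import time
import torch

def cheap_objectives(model, input_shape=(1, 3, 32, 32), n_timing_runs=50):
    """Return (#parameters, average forward-pass time in seconds) for one candidate."""
    n_params = sum(p.numel() for p in model.parameters())

    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        model(x)                                  # warm-up run
        start = time.perf_counter()
        for _ in range(n_timing_runs):
            model(x)
        seconds = (time.perf_counter() - start) / n_timing_runs
    return n_params, seconds
```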
LEMONADE natively handles this unconstrained, arbitrarily large search space, whereas other methods are by design a-priori restricted to relatively small search spaces (; BID15 . Also, LEMONADE is initialized with trivial architectures (see Appendix A.3.2) rather than networks that already yield state-of-the-art performance (b;). The set of operators to generate child networks we consider in our experiments are the three network morphism operators (insert convolution, insert skip connection, increase number of filters), as well as the three approximate network morphism operators (remove layer, prune filters, replace layer) described in Section 3. The operators are sampled uniformly at random to generate children. The experiment ran for approximately 5 days on 16 GPUs in parallel. The ing Pareto front consists of approximately 300 neural network architectures. We compare against different-sized NASNets and MobileNets V2 BID22. In order to ensure that differences in test error are actually caused by differences in the discovered architectures rather than different training conditions, we retrained all architectures from scratch using exactly the same optimization pipeline with the same hyperparameters. We do not use stochastic regularization techniques, such as Shake-Shake or ScheduledDropPath in this experiment as they are not applicable to all networks out of the box. The are visualized in FIG3. As one would expect, the performance on CIFAR-10 and CIFAR-100 is highly correlated, hence the ing Pareto fronts only consist of a few elements and differences are rather small (top left). When considering the performance on CIFAR-10 versus the number of parameters (top right) or multiply-add operations (bottom left), LEMONADE is on par with NASNets and MobileNets V2 for resource-intensive models while it outperforms them in the area of very efficient models (e.g., less than 100,000 parameters). In terms of inference time (bottom right), LEMONADE clearly finds models superior to the baselines. We highlight that this has been achieved based on using only 80 GPU days for LEMONADE compared to 2000 in and with a significantly more complex Search Space I (since the entire architecture was optimized and not only a convolutional cell).We refer to Appendix A.2 for an experiment with additional baselines (e.g., random search) and an ablation study. Above, we compared different models when trained with the exact same data augmentation and training pipeline. We now also briefly compare LEMONADE's performance to reported in the literature. We apply two widely used methods to improve over the training pipeline used above: (i) instead of searching for entire architectures, we search for cells that are employed within a hand-crafted macro architecture, meaning one replaces repeating building blocks in the architecture with discovered cells (b;) and (ii) using stochastic regularization techniques, such as ScheduledDropPath during training BID19 b). In our case, we run LEMONADE to search for cells within the ShakeShake macro architecture (i.e., we replace basic convolutional blocks with cells) and also use ShakeShake regularization . Table 1. LEMONADE is on par or outperforms DPP-Net across all parameter regimes. As all other methods solely optimize for accuracy, they do not evaluate models with few parameters. However, also for larger models, LEMONADE is competitive to methods that require significantly more computational resources or start their search with non-trivial architectures (b;). 
To study the transferability of the discovered cells to a different dataset (without having to run architecture search itself on the target dataset), we built architectures suited for ImageNet64x64 based on five cells discovered on CIFAR-10. We vary the number of cells per block and the number of filters in the last block to obtain different architectures for a single cell (as done by for NASNets). We compare against different sized MobileNets V2, NASNets and Wide Residual Networks (WRNs) BID29. For direct comparability, we again train all architectures in the same way. In FIG4, we plot the Pareto Front from all cells combined, as well as the Pareto Front from a single cell, Cell 2, against the baselines. Both clearly dominate NASNets, WRNs and MobileNets V2 over the entire parameter range, showing that a multi-objective search again is beneficial. We also evaluated one discovered cell, Cell 2, on the regular ImageNet benchmark for the "mobile setting" (i.e., networks with 4M to 6M parameters and less than 600M multiply-add operations). The cell found by LEMONADE achieved a top-1 error of 28.3% and a top-5 error of 9.6%; this is slightly worse than published for, e.g., NASNet (26% and 8.4%, respectively) but still competitive, especially seeing that (due to time and resource constraints), we used an off-the-shelf training pipeline, on a single GPU (for four weeks), and did not alter any hyperparameters. We believe that our cell could perform substantially better with a better optimization pipeline and properly tuned hyperparameters (as in many other NAS papers by authors with more compute resources). We have proposed LEMONADE, a multi-objective evolutionary algorithm for architecture search. The algorithm employs a Lamarckian inheritance mechanism based on (approximate) network morphism operators to speed up the training of novel architectures. Moreover, LEMONADE exploits the fact that evaluating several objectives, such as the performance of a neural network, is orders of magnitude more expensive than evaluating, e.g., a model's number of parameters. Experiments on CIFAR-10 and ImageNet64x64 show that LEMONADE is able to find competitive models and cells both in terms of accuracy and of resource efficiency. We believe that using more sophisticated concepts from the multi-objective evolutionary algorithms literature and using other network operators (e.g., crossovers and advanced compression methods) could further improve LEMONADE's performance in the future. In the following two subsections we give some detailed information on the network morphisms and approximate network morphisms employed in our work. A network morphism is a network operator satisfying the network morphism equation: DISPLAYFORM0 withw i = (w i, A, b). The network morphism equation FORMULA8 then holds for A = 1, b = 0. This morphism can be used to add a fully-connected or convolutional layer, as these layers are simply linear mappings. dubbed this morphism "Net2DeeperNet". Alternatively to the above replacement, one could also choosẽ DISPLAYFORM1 DISPLAYFORM2 A, b are fixed, non-learnable. In this case, network morphism Equation (DISPLAYFORM3 with an arbitrary functionh wh (x). The new parameters arew i = (w i, wh,Ã). Again, Equation can trivially be satisfied by settingà = 0. 
We can think of two modifications of a neural network that can be expressed by this morphism: firstly, a layer can be widened (i.e., increasing the number of units in a fully connected layer or the number of channels in a CNN -the Net2WiderNet transformation of Chen et al. FORMULA6). Let h(x) be the layer to be widened. For example, we can then seth = h to simply double the width. Secondly, skip-connections by concatenation as used by BID5 can also be expressed. If h(x) itself is a sequence of layers, h(x) = h n (x) • · · · • h 0 (x), then one could chooseh(x) = x to realize a skip from h 0 to the layer subsequent to h n.Network morphism Type III. By definition, every idempotent function N wi i can simply be replaced by DISPLAYFORM4 with the initializationw i = w i. This trivially also holds for idempotent functions without weights, e.g., ReLU.Network morphism Type IV. Every layer N wi i is replaceable bỹ Nw DISPLAYFORM5 with an arbitrary function h and Equation FORMULA8 holds if the learnable parameter λ is initialized as 1. This morphism can be used to incorporate any function, especially any non-linearity. For example, BID27 use a special case of this operator to deal with non-linear, non-idempotent activation functions. Another example would be the insertion of an additive skip connection, which were proposed by to the layer subsequent to N wi n in. Note that every combination of network morphisms again yields a network morphism. Hence, one could, for example, add a block "Conv-BatchNorm-ReLU" subsequent to a ReLU layer by using Equations FORMULA9, FORMULA10 and. LEMONADE essentially consists of three components: (i) additionally using approximate network morphism operators to also allow shrinking architectures, (ii) using Lamarckism, i.e., (approximate) network morphisms, to avoid training from scratch, and (iii) the two-staged sampling strategy. In Figure 6, we present for deactivating each of these components one at a time. The shows that all three components improve LEMONADE's performance. In this section we list all the experimental details. Search Space I corresponds to searching for an entire architecture (rather than cells). LEMONADE's Pareto front was initialized to contain four simple convolutional networks with relatively large validation errors of 30 − 50%. All four initial networks had the following structure: three Conv- Figure 6: Ablation study on CIFAR-10. We deactivate different components of LEMONADE and investigate the impact. LEMONADE default: Performance of LEMONADE as proposed in this work. LEMONADE no ANM: we deactivated the approximate network morphisms operators, i.e., networks can only grow in size. LEMONADE no Lamarckism: all networks are initialized from scratch instead by means of (approximate) network morphisms. LEMONADE no KDE: we deactivate the proposed sampling strategy and use uniform sampling of parents and children instead. BatchNorm-ReLU blocks with intermittent Max-Pooling, followed by a global average pooling and a fully-connected layer with softmax activation. The networks differ in the number of channels in the convolutions, and for further diversity two of them used depthwise-separable convolutions. The models had 15 000, 50 000, 100 000 and 400 000 parameters, respectively. 
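A minimal PyTorch sketch of one of these trivial seed architectures is given below; the concrete channel widths are illustrative, since the text specifies only the overall structure and the resulting parameter counts of the four initial models.

```python
import torch.nn as nn

def make_seed_network(channels=(16, 32, 64), n_classes=10):
    """Three Conv-BatchNorm-ReLU blocks with intermittent max-pooling, global average
    pooling and a final fully-connected layer (the softmax is applied in the loss)."""
    layers, in_ch = [], 3
    for out_ch in channels:
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                   nn.BatchNorm2d(out_ch),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]
        in_ch = out_ch
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, n_classes)]
    return nn.Sequential(*layers)
```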
For generating children in LEMONADE, we chose the number of operators that are applied to parents uniformly from {1,2,3}.LEMONADE natively handles this unconstrained, arbitrary large search space, whereas other methods are by design restricted a-priori to relatively small search spaces (; BID15 .We restricted the space of neural architectures such that every architecture must contain at least 3 (depthwise separable) convolutions with a minimum number of filters, which lead to a lower bound on the number of parameters of approximately 10 000.The network operators implicitly define the search space, we do not limit the size of discovered architectures. Search Space II consists of convolutional cells that are used within some macro architecture to build the neural network. In the experiments in Section 5, we use cells within the macro architecture of the Shake-Shake architecture , whereas in the baseline experiment in the appendix (Section A.2), we rely on a simpler scheme as in as in BID13, i.e., sequentially stacking cells. We only choose a single operator to generate children, but the operator is applied to all occurrences of the cell in the architecture. The Pareto Front was again initialized with four trivial cells: the first two cells consist of a single convolutional layer (followed by BatchNorm and ReLU) with F = 128 and F = 256 filters in the last block, respectively. The other two cells consist of a single depthwise separable convolution (followed by BatchNorm and ReLU), again with either F = 128 or F = 256 filters. To classify CIFAR-10 with MobileNets V1 and V2, we replaced three blocks with stride 2 with identical blocks with stride 1 to adapt the networks to the lower spatial resolution of the input. We chose the replaced blocks so that there are the same number of stride 1 blocks between all stride 2 blocks. We varied the size of MobileNets V1 and V2 by varying the width multiplier α ∈ {0.1, 0.2, . . ., 1.2} and NASNets by varying the number of cell per block (∈ {2, 4, 6, 8}) and number of filters (∈ {96, 192, 384, 768, 1536}) in the last block. We apply the standard data augmentation scheme described by BID16, as well as the recently proposed methods mixup BID30 and Cutout . The training set is split up in a training (45.000) and a validation (5.000) set for the purpose of architecture search. We use weight decay (5 · 10 −4) for all models. We use batch size 64 throughout all experiments. During architecture search as well as for generating the random search baseline, all models are trained for 20 epochs using SGD with cosine annealing BID16, decaying the learning rate from 0.01 to 0. For evaluating the test performance, all models are trained from scratch on the training and validation set with the same setup as described above except for 1) we train for 600 epochs and 2) the initial learning rate is set to 0.025. While searching for convolutional cells on CIFAR-10, LEMONADE ran for approximately 56 GPU days. However, there were no significant changes in the Pareto front after approximately 24 GPU days. The training setup (both during architecture search and final evaluation) is exactly the same as before. The training setup on ImageNet64x64 is identical to. Below we list some additional figures. | We propose a method for efficient Multi-Objective Neural Architecture Search based on Lamarckian inheritance and evolutionary algorithms. | 1,263 | scitldr |
We address two challenges of probabilistic topic modelling in order to better estimate the probability of a word in a given context, i.e., P(wordjcontext): No Language Structure in Context: Probabilistic topic models ignore word order by summarizing a given context as a “bag-of-word” and consequently the semantics of words in the context is lost. In this work, we incorporate language structure by combining a neural autoregressive topic model (TM) with a LSTM based language model (LSTM-LM) in a single probabilistic framework. The LSTM-LM learns a vector-space representation of each word by accounting for word order in local collocation patterns, while the TM simultaneously learns a latent representation from the entire document. In addition, the LSTM-LM models complex characteristics of language (e.g., syntax and semantics), while the TM discovers the underlying thematic structure in a collection of documents. We unite two complementary paradigms of learning the meaning of word occurrences by combining a topic model and a language model in a unified probabilistic framework, named as ctx-DocNADE. Limited Context and/or Smaller training corpus of documents: In settings with a small number of word occurrences (i.e., lack of context) in short text or data sparsity in a corpus of few documents, the application of TMs is challenging. We address this challenge by incorporating external knowledge into neural autoregressive topic models via a language modelling approach: we use word embeddings as input of a LSTM-LM with the aim to improve the wordtopic mapping on a smaller and/or short-text corpus. The proposed DocNADE extension is named as ctx-DocNADEe. We present novel neural autoregressive topic model variants coupled with neural language models and embeddings priors that consistently outperform state-of-theart generative topic models in terms of generalization (perplexity), interpretability (topic coherence) and applicability (retrieval and classification) over 6 long-text and 8 short-text datasets from diverse domains. Probabilistic topic models, such as LDA BID1, Replicated Softmax (RSM) and Document Neural Autoregressive Distribution Estimator (DocNADE) variants BID12 BID34 BID15 BID8 are often used to extract topics from text collections, and predict the probabilities of each word in a given document belonging to each topic. Subsequently, they learn latent document representations that can be used to perform natural language processing (NLP) tasks such as information retrieval (IR), document classification or summarization. However, such probabilistic topic models ignore word order and represent a given context as a bag of its words, thereby disregarding semantic information. To motivate our first task of extending probabilistic topic models to incorporate word order and language structure, assume that we conduct topic analysis on the following two sentences: When estimating the probability of a word in a given context (here: P ("bear"|context)), traditional topic models do not account for language structure since they ignore word order within the context and are based on "bag-of-words" (BoWs) only. In this particular setting, the two sentences have the same unigram statistics, but are about different topics. 
On deciding which topic generated the word "bear" in the second sentence, the preceding words "market falls" make it more likely that it was generated by a topic that assigns a high probability to words related to stock market trading, where "bear territory" is a colloquial expression in the domain. In addition, the language structure (e.g., syntax and semantics) is also ignored. For instance, the word "bear" in the first sentence is a proper noun and subject while it is an object in the second. In practice, topic models also ignore functional words such as "into", which may not be appropriate in some scenarios. Recently, BID23 have shown that a deep contextualized LSTM-based language model (LSTM-LM) is able to capture different language concepts in a layer-wise fashion, e.g., the lowest layer captures language syntax and topmost layer captures semantics. However, in LSTM-LMs the probability of a word is a function of its sentence only and word occurrences are modeled in a fine granularity. Consequently, LSTM-LMs do not capture semantics at a document level. To this end, recent studies such as TDLM BID14, Topic-RNN and TCNLM BID32 have integrated the merits of latent topic and neural language models (LMs); however, they have focused on improving LMs with global (semantics) dependencies using latent topics. Similarly, while bi-gram LDA based topic models BID31 BID33 and n-gram based topic learning BID15 can capture word order in short contexts, they are unable to capture long term dependencies and language concepts. In contrast, DocNADE variants BID12 BID8 ) learns word occurrences across documents i.e., coarse granularity (in the sense that the topic assigned to a given word occurrence equally depends on all the other words appearing in the same document); however since it is based on the BoW assumption all language structure is ignored. In language modeling, BID17 have shown that recurrent neural networks in a significant reduction of perplexity over standard n-gram models. Contribution 1: We introduce language structure into neural autoregressive topic models via a LSTM-LM, thereby accounting for word ordering (or semantic regularities), language concepts and long-range dependencies. This allows for the accurate prediction of words, where the probability of each word is a function of global and local (semantics) contexts, modeled via DocNADE and LSTM-LM, respectively. The proposed neural topic model is named as contextualized-Document Neural Autoregressive Distribution Estimator (ctx-DocNADE) and offers learning complementary semantics by combining joint word and latent topic learning in a unified neural autoregressive framework. For instance, FIG0 (left and middle) shows the complementary topic and word semantics, based on TM and LM representations of the term "fall". Observe that the topic captures the usage of "fall" in the context of stock market trading, attributed to the global (semantic) view. While this is a powerful approach for incorporating language structure and word order in particular for long texts and corpora with many documents, learning from contextual information remains challenging in settings with short texts and few documents, since limited word co-occurrences or little context significant word non-overlap in such short texts and small training corpus of documents lead to little evidence for learning word co-occurrences. However, distributional word representations (i.e. 
word embeddings) BID22 have shown to capture both the semantic and syntactic relatedness in words and demonstrated impressive performance in NLP tasks. For example, assume that we conduct topic analysis over the two short text fragments: Deal with stock index falls and Brace for market share drops. Traditional topic models with "BoW" assumption will not be able to infer relatedness between word pairs such as (falls, drops) due to the lack of word-overlap and small context in the two phrases. However, in the distributed embedding space, the word pairs are semantically related as shown in FIG0 (left). DISPLAYFORM0 Related work such as BID26 employed web search to improve the information in short texts and BID24 introduced word similarity via thesauri and dictionaries into LDA. BID5 and BID20 integrated word embeddings into LDA and Dirichlet Multinomial Mixture (DMM) BID21 models. Recently, BID8 extends DocNADE by introducing pre-trained word embeddings in topic learning. However, they ignore the underlying language structure, e.g., word ordering, syntax, etc. In addition, DocNADE and its extensions outperform LDA and RSM topic models in terms of perplexity and IR.Contribution 2: We incorporate distributed compositional priors in DocNADE: we use pre-trained word embeddings via LSTM-LM to supplement the multinomial topic model (i.e., DocNADE) in learning latent topic and textual representations on a smaller corpus and/or short texts. Knowing similarities in a distributed space and integrating this complementary information via a LSTM-LM, a topic representation is much more likely and coherent. Taken together, we combine the advantages of complementary learning and external knowledge, and couple topic-and language models with pre-trained word embeddings to model short and long text documents in a unified neural autoregressive framework, named as ctx-DocNADEe. Our approach learns better textual representations, which we quantify via generalizability (e.g., perplexity), interpretability (e.g., topic extraction and coherence) and applicability (e.g., IR and classification).To illustrate our two contributions, we apply our modeling approaches to 7 long-text and 8 short-text datasets from diverse domains and demonstrate that our approach consistently outperforms state-ofthe-art generative topic models. Our learned representations, in a gain of: 4.6% (.790 vs .755) in topic coherence, 6.5% (.615 vs .577) in precision at retrieval fraction 0.02, and 4.4% (.662 vs .634) in F 1 for text classification, averaged over 6 long-text and 8 short-text datasets. When applied to short-text and long-text documents, our proposed modeling approaches generate contextualized topic vectors, which we name textTOvec. The code is available at https: //github.com/pgcool/textTOvec. Generative models are based on estimating the probability distribution of multidimensional data, implicitly requiring modeling complex dependencies. Restricted Boltzmann Machine (RBM) BID9 and its variants BID11 are probabilistic undirected models of binary data. RSM and its variants BID7 are generalization of the RBM, that are used to model word counts. However, estimating the complex probability distribution of the underlying high-dimensional observations is intractable. To address this challenge, NADE BID13 decomposes the joint distribution of binary observations into autoregressive conditional distributions, each modeled using a feed-forward network. Unlike for RBM/RSM, this leads to tractable gradients of the data negative log-likelihood. 
An extension of NADE and RSM, DocNADE BID12 ) models collections of documents as orderless bags of words (BoW approach), thereby disregarding any language structure. In other words, it is trained to learn word representations reflecting the underlying topics of the documents only, ignoring syntactical and semantic features as those encoded in word embeddings BID0 BID18 BID22 BID23. DocNADE BID15 DISPLAYFORM0, where each autoregressive conditional p(v i |v <i) for the word observation v i is computed using the preceding observations v <i ∈ {v 1, ..., v i−1} in a feed-forward neural network for i ∈ {1, ...D}, DISPLAYFORM1 where g(·) is an activation function, U ∈ R K×H is a weight matrix connecting hidden to output, e ∈ R H and b ∈ R K are bias vectors, W ∈ R H×K is a word representation matrix in which a column W:,vi is a vector representation of the word v i in the vocabulary, and H is the number of hidden units (topics). The log-likelihood of any document v of any arbitrary length is given by: DISPLAYFORM2 Note that the past word observations v <i are orderless due to BoWs, and may not correspond to the words preceding the ith word in the document itself. Input: A training document v Input: Word embedding matrix E Output: log p(v) 1: a ← e 2: q(v) = 1 3: for i from 1 to D do 4:compute hi and p(vi|v<i) 5: Table 1: Computation of hi and p(vi|v<i) in DocNADE, ctx-DocNADE and ctx-DocNADEe models, correspondingly used in estimating log p(v) (Algorithm 1). DISPLAYFORM0 We propose two extensions of the DocNADE model: ctx-DocNADE: introducing language structure via LSTM-LM and ctx-DocNADEe: incorporating external knowledge via pre-trained word embeddings E, to model short and long texts. The unified network(s) account for the ordering of words, syntactical and semantic structures in a language, long and short term dependencies, as well as external knowledge, thereby circumventing the major drawbacks of BoW-based representations. Similar to DocNADE, ctx-DocNADE models each document v as a sequence of multinomial observations. Let [x 1, x 2, ..., x N] be a sequence of N words in a given document, where x i is represented by an embedding vector of dimension, dim. Further, for each element v i ∈ v, let c i = [x 1, x 2, ..., x i−1] be the context (preceding words) of ith word in the document. Unlike in DocNADE, the conditional probability of the word v i in ctx-DocNADE (or ctx-DocNADEe) is a function of two hidden vectors: LSTM-based components of ctx-DocNADE, respectively: DISPLAYFORM0 DISPLAYFORM1 where h In the weight matrix W of DocNADE BID12, each row vector W j,: encodes topic information for the jth hidden topic feature and each column vector W:,vi is a vector for the word v i. To obtain complementary semantics, we exploit this property and expose W to both global and local influences by sharing W in the DocNADE and LSTM-LM componenents. Thus, the embedding layer of LSTM-LM component represents the column vectors. DISPLAYFORM2 ctx-DocNADE, in this realization of the unified network the embedding layer in the LSTM component is randomly initialized. This extends DocNADE by accounting for the ordering of words and language concepts via context-dependent representations for each word in the document.ctx-DocNADEe, the second version extends ctx-DocNADE with distributional priors, where the embedding layer in the LSTM component is initialized by the sum of a pre-trained embedding matrix E and the weight matrix W. Note that W is a model parameter; however E is a static prior. 
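The sketch below illustrates how the two hidden states are combined in each autoregressive step of ctx-DocNADE: a running bag-of-words sum (the DocNADE state) and an LSTM state over the same prefix are mixed with weight λ before the softmax over the vocabulary, with the word-representation matrix W shared between the two components and optionally summed with a static embedding matrix E for ctx-DocNADEe. This is a simplified single-document reading with g taken to be the sigmoid; class and argument names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CtxDocNADE(nn.Module):
    """Single-document sketch of the ctx-DocNADE conditionals p(v_i | v_<i)."""

    def __init__(self, vocab_size, hidden_size, lam=0.01, pretrained_E=None):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(hidden_size, vocab_size))
        self.e = nn.Parameter(torch.zeros(hidden_size))        # hidden bias
        self.U = nn.Linear(hidden_size, vocab_size)            # output weights U, bias b
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.lam = lam                                         # mixture weight lambda
        self.register_buffer("E", pretrained_E if pretrained_E is not None
                             else torch.zeros(hidden_size, vocab_size))

    def log_likelihood(self, doc):
        """doc: 1-D LongTensor of word indices v_1 ... v_D; returns log p(v)."""
        emb = (self.W + self.E).t()[doc]               # shared word representations
        lstm_out, _ = self.lstm(emb.unsqueeze(0))      # h_i^LM over each prefix
        h_dn = self.e.clone()                          # running bag-of-words sum
        log_p = 0.0
        for i, v_i in enumerate(doc):
            h_lm = lstm_out[0, i - 1] if i > 0 else torch.zeros_like(h_dn)
            h_i = torch.sigmoid(h_dn + self.lam * h_lm)        # g(h_i^DN + lam * h_i^LM)
            log_p = log_p + F.log_softmax(self.U(h_i), dim=-1)[v_i]
            h_dn = h_dn + self.W[:, v_i]               # include v_i for the next step
        return log_p
```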
Algorithm 1 and Table 1 show the log p(v) for a document v in three different settings: Doc-NADE, ctx-DocNADE and ctx-DocNADEe. In the DocNADE component, since the weights in the matrix W are tied, the linear activation a can be re-used in every hidden layer and computational complexity reduces to O(HD), where H is the size of each hidden layer. In every epoch, we run an LSTM over the sequence of words in the document and extract hidden vectors h LM i, corresponding to c i for every target word v i. Therefore, the computational complexity in ctx-DocNADE or ctx-DocNADEe is O(HD + N), where N is the total number of edges in the LSTM network BID10 BID27. The trained models can be used to extract a textTOvec representation, i.e., h(v DISPLAYFORM3 ctx-DeepDNEe: DocNADE and LSTM can be extended to a deep, multiple hidden layer architecture by adding new hidden layers as in a regular deep feed-forward neural network, allowing for improved performance. In the deep version, the first hidden layer is computed in an analogous fashion to DocNADE variants (equation 1 or 2). Subsequent hidden layers are computed as: DISPLAYFORM4 where n is the total number of hidden layers (i.e., depth) in the deep feed-forward and LSTM networks. For d=1, the hidden vectors h DN i,1 and h LM i,1 correspond to equations 1 and 2. The conditional p(v i = w|v <i) is computed using the last layer n, i.e., h i,n = h TAB10: State-of-the-art comparison: IR (i.e, IR-precision at 0.02 fraction) and classification F 1 for short texts, where Avg: average over the row values, the bold and underline: the maximum for IR and F1, respectively. DISPLAYFORM5 We apply our modeling approaches (in improving topic models, i.e, DocNADE using language concepts from LSTM-LM) to 8 short-text and 7 long-text datasets of varying size with single/multiclass labeled documents from public as well as industrial corpora. We present four quantitative measures in evaluating topic models: generalization (perplexity), topic coherence, text retrieval and categorization. See the appendices for the data description and example texts. TAB1 shows the data statistics, where 20NS and R21578 signify 20NewsGroups and Reuters21578, respectively. Baselines: While, we evaluate our multi-fold contributions on four tasks: generalization (perplexity), topic coherence, text retrieval and categorization, we compare performance of our proposed models ctx-DocNADE and ctx-DocNADEe with related baselines based on: word representation: glove BID22, where a document is represented by summing the embedding vectors of it's words, document representation: doc2vec , LDA based BoW TMs: ProdLDA and SCHOLAR 1 BID3 ) neural BoW TMs: DocNADE and NTM BID2 and, TMs, including pre-trained word embeddings: Gauss-LDA (GaussianLDA) BID5, and glove-DMM, glove-LDA BID20. jointly 2 trained topic and language models: TDLM , Topic-RNN and TCNLM BID32. Experimental Setup: DocNADE is often trained on a reduced vocabulary (RV) after pre-processing (e.g., ignoring functional words, etc.); however, we also investigate training it on full text/vocabulary (FV) (TAB1) and compute document representations to perform different evaluation tasks. The FV setting preserves the language structure, required by LSTM-LM, and allows a fair comparison of DocNADE+FV and ctx-DocNADE variants. We use the glove embedding of 200 dimensions. All the baselines and proposed models (ctx-DocNADE, ctx-DocNADEe and ctx-DeepDNEe) were run in the FV setting over 200 topics to quantify the quality of the learned representations. 
To better initialize the complementary learning in ctx-DocNADEs, we perform a pre-training for 10 epochs with λ set to 0. See the appendices for the experimental setup and hyperparameters for the following tasks, including the ablation over λ on validation set. BID14 for all the short-text datasets to evaluate the quality of representations learned in the spare data setting. For a fair comparison, we set 200 topics and hidden size, and initialize with the same pre-trained word embeddings (i.e., glove) as used in the ctx-DocNADEe. DISPLAYFORM0 To evaluate the generative performance of the topic models, we estimate the log-probabilities for the test documents and compute the average held-out perplexity (P P L) per word as, P P L = exp − 1 z z t=1 DISPLAYFORM0, where z and |v t | are the total number of documents and words in a document v t. For DocNADE, the log-probability log p(v t) is computed using L DN (v); however, we ignore the mixture coefficient, i.e., λ=0 (equation 2) to compute the exact log-likelihood in ctx-DocNADE versions. The optimal λ is determined based on the validation set. TAB5 quantitatively shows the PPL scores, where the complementary learning with λ = 0.01 (optimal) in ctx-DocNADE achieves lower perplexity than the baseline DocNADE for both short and long texts, e.g., (822 vs 846) and (1358 vs 1375) on AGnewstitle and 20NS 4 datasets, respectively in the FV setting. We compute topic coherence BID4 BID19 BID7 to assess the meaningfulness of the underlying topics captured. We choose the coherence measure proposed by BID25, which identifies context features for each topic word using a sliding window over the reference corpus. Higher scores imply more coherent topics. We use the gensim module (radimrehurek.com/gensim/models/coherencemodel.html, coherence type = c v) to estimate coherence for each of the 200 topics (top 10 and 20 words). TAB7 shows average coherence over 200 topics, where the higher scores in ctx-DocNADE compared to DocNADE (.772 vs .755) suggest that the contextual information and language structure help in generating more coherent topics. The introduction of embeddings in ctx-DocNADEe boosts the topic coherence, leading to a gain of 4.6% (.790 vs .755) on average over 11 datasets. Note that the proposed models also outperform the baselines methods glove-DMM and glove-LDA. Qualitatively, Table 8 illustrates an example topic from the 20NSshort text dataset for DocNADE, ctx-DocNADE and ctx-DocNADEe, where the inclusion of embeddings in a more coherent topic. Additional Baslines: We further compare our proposed models to other approaches that combining topic and language models, such as TDLM , Topic-RNN and TCNLM BID32. However, the related studies focus on improving language models using topic models: in contrast, the focus of our work is on improving topic models for textual representations (short-text or long-text documents) by incorporating language concepts (e.g., word ordering, syntax, semantics, etc.) and external knowledge (e.g., word embeddings) via neural language models, as discussed in section 1.To this end, we follow the experimental setup of the most recent work, TCNLM and quantitatively compare the performance of our models (i.e., ctx-DocNADE and ctx-DocNADEe) in terms of topic coherence (NPMI) on BNC dataset. The sliding window is one of the hyper-parameters for computing topic coherence BID25 BID32. A sliding window of 20 is used in TCNLM; in addition we also present for a window of size 110. 
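Spelled out, the per-word held-out perplexity used in the generalization task is PPL = exp(-(1/z) Σ_t log p(v_t) / |v_t|). The short helper below computes it from per-document log-likelihoods; it assumes log p(v_t) has already been estimated by the trained model (via L_DN(v), with λ = 0 for the exact likelihood as stated above).

```python
import numpy as np

def held_out_perplexity(doc_log_probs, doc_lengths):
    """PPL = exp( -(1/z) * sum_t log p(v_t) / |v_t| ), with z test documents.

    doc_log_probs[t] : log p(v_t) of test document t under the trained model
    doc_lengths[t]   : number of words |v_t| in document t
    """
    doc_log_probs = np.asarray(doc_log_probs, dtype=float)
    doc_lengths = np.asarray(doc_lengths, dtype=float)
    return float(np.exp(-np.mean(doc_log_probs / doc_lengths)))
```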
λ is the mixture weight of the LM component in the topic modeling process, and (s) and (l) indicate small and large model, respectively. The symbol'-' indicates no , since word embeddings of 150 dimensions are not available from glove vectors. (Right): The top 5 words of seven learnt topics from our models and TCNLM. The asterisk (*) indicates our proposed models and (#) taken from TCNLM BID32.ues of λ illustrates the relevance of the LM component for topic coherence (DocNADE corresponds to λ=0). Similarly, the inclusion of word embeddings (i.e., ctx-DocNADEe) in more coherent topics than the baseline DocNADE. Importantly, while ctx-DocNADEe is motivated by sparse data settings, the BNC dataset is neither a collection of short-text nor a corpus of few documents. Consequently, ctx-DocNADEe does not show improvements in topic coherence over ctx-DocNADE.In TAB8 (right), we further qualitatively show the top 5 words of seven topics (topic name summarized by BID32) from TCNML and our models. Observe that ctx-DocNADE captures a topic expression that is a collection of only verbs in the past participle. Since the BNC dataset is unlabeled, we are here restricted to comparing model performance in terms of topic coherence only. Text Retrieval: We perform a document retrieval task using the short-text and long-text documents with label information. We follow the experimental setup similar to BID15, where all test documents are treated as queries to retrieve a fraction of the closest documents in the original training set using cosine similarity measure between their textTOvec representations (section 2.2). To compute retrieval precision for each fraction (e.g., 0.0001, 0.005, 0.01, 0.02, 0.05, etc.), we average the number of retrieved training documents with the same label as the query. For multi-label datasets, we average the precision scores over multiple labels for each query. Since, BID28 and BID15 have shown that RSM and DocNADE strictly outperform LDA on this task, we solely compare DocNADE with our proposed extensions. TAB10 and 4 show the retrieval precision scores for the short-text and long-text datasets, respectively at retrieval fraction 0.02. Observe that the introduction of both pre-trained embeddings and language/contextual information leads to improved performance on the IR task noticeably for short texts. We also investigate topic modeling without pre-processing and filtering certain words, i.e. the FV setting and find that the DocNADE(FV) or glove(FV) improves IR precision over the baseline RV setting. Therefore, we opt for the FV in the proposed extensions. On an average over the 8 shorttext and 6 long-text datasets, ctx-DocNADEe reports a gain of 7.1% (.630 vs .588) (in IR-precision. In addition, the deep variant (d=3) with embeddings, i.e., ctx-DeepDNEe shows competitive performance on TREC6 and Subjectivity datasets. FIG4, 3d, 3e and 3f) illustrate the average precision for the retrieval task on 6 datasets. Observe that the ctx-DocNADEe outperforms DocNADE(RV) at all the fractions and demonstrates a gain of 6.5% (.615 vs .577) in precision at fraction 0.02, averaged over 14 datasets. Additionally, our proposed models outperform TDLM and ProdLDA 6 (for 20NS) by noticeable margins. We perform text categorization to measure the quality of our textTovec representations. We consider the same experimental setup as in the retrieval task and extract textTOvec of 200 dimension for each document, learned during the training of ctx-DocNADE variants. 
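Before detailing the categorization setup, the retrieval protocol described above can be sketched as follows: each test document queries the training set by cosine similarity between textTOvec representations, and precision at a retrieval fraction is the share of retrieved documents carrying the query's label. Inputs are assumed to be numpy arrays; the multi-label averaging used for R21578 and RCV1V2 is omitted for brevity.

```python
import numpy as np

def retrieval_precision(train_vecs, train_labels, query_vecs, query_labels,
                        fraction=0.02):
    """Average precision over all queries at a fixed retrieval fraction."""
    train_labels = np.asarray(train_labels)
    train = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    queries = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    k = max(1, int(fraction * len(train)))

    precisions = []
    for q, label in zip(queries, query_labels):
        top_k = np.argsort(-(train @ q))[:k]        # indices of most similar documents
        precisions.append(np.mean(train_labels[top_k] == label))
    return float(np.mean(precisions))
```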
To perform text categorization, we employ a logistic regression classifier with L2 regularization. While, ctx-DocNADEe and ctx-DeepDNEe make use of glove embeddings, they are evaluated against the topic model baselines with embeddings. For the short texts TAB10, the glove leads DocNADE in classification performance, suggesting a need for distributional priors in the topic model. Therefore, the ctx-DocNADEe reports a gain of 4.8% (.705 vs .673) and 3.6%(.618 vs .596) in F 1, compared to DocNADE(RV) on an average over the short TAB10 and long TAB4 texts, respectively. In , a gain of 4.4% (.662 vs .634) overall. In terms of classification accuracy on 20NS dataset, the scores are: DocNADE (0.734), ctxDocNADE (0.744), ctx-DocNADEe (0.751), NTM (0.72) and SCHOLAR (0.71). While, our proposed models, i.e., ctx-DocNADE and ctx-DocNADEe outperform both NTM ( taken from BID2, FIG1) and SCHOLAR ( taken from BID3, TAB1), the DocNADE establishes itself as a strong neural topic model baseline. To further interpret the topic models, we analyze the meaningful semantics captured via topic extraction. Table 8 shows a topic extracted using 20NS dataset that could be interpreted as computers, which are (sub)categories in the data, confirming that meaningful topics are captured. Observe that the ctx-DocNADEe extracts a more coherent topic due to embedding priors. To qualitatively inspect the contribution of word embeddings and textTOvec representations in topic models, we analyse the text retrieved for each query using the representations learned from DocNADE and ctxDoocNADEe models. Table 9 illustrates the retrieval of the top 3 texts for an input query, selected from TMNtitle dataset, where #match is YES if the query and retrievals have the same class label. Observe that ctx-DocNADEe retrieves the top 3 texts, each with no unigram overlap with the query. vga, screen, computer, color, svga, graphics computer, sell, screen, offer, bar, macintosh, color, powerbook, vga, card, san, windows, sold, cars, terminal, forsale, utility, monitor, svga, offer gov, vesa computer, processor.554.624.667 Table 8: A topic of 20NS dataset with coherence -DocNADEe Query:: "emerging economies move ahead nuclear plans" #match ctx-#IR1:: imf sign lifting japan yen YES #IR2:: japan recovery takes hold debt downgrade looms YES #IR3:: japan ministers confident treasuries move YES DocNADE #IR1:: nuclear regulator back power plans NO #IR2:: defiant iran plans big rise nuclear NO #IR3:: japan banks billion nuclear operator sources YES Table 9: Illustration of the top-3 retrievals for an input query Additionally, we show the quality of representations learned at different fractions (20%, 40%, 60%, 80%, 100%) of training set from TMNtitle data and use the same experimental setup for the IR and classification tasks, as in section 3.3. In FIG5, we quantify the quality of representations learned and demonstrate improvements due to the proposed models, i.e., ctx-DocNADE and ctx-DocNADEe over DocNADE at different fractions of the training data. Observe that the gains in both the tasks are large for smaller fractions of the datasets. For instance, one of the proposed models, i.e., ctxDocNADEe (vs DocNADE) reports: a precision (at 0.02 fraction) of 0.580 vs 0.444 at 20% and 0.595 vs 0.525 at 100% of the training set, and an F1 of 0.711 vs 0.615 at 20% and 0.726 vs 0.688 at 100% of the training set. Therefore, the findings conform to our second contribution of improving topic models with word embeddings, especially in the sparse data setting. 
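A corresponding sketch of the categorization evaluation uses scikit-learn's logistic regression with L2 regularization on the 200-dimensional textTOvec representations and scores with macro-averaged F1. The regularization strength is left at its default here, and multi-label datasets would additionally need a one-vs-rest wrapper.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def classify_texttovec(train_vecs, train_labels, test_vecs, test_labels):
    """Train an L2-regularized logistic regression on textTOvec document vectors
    and report macro-averaged F1 on the test split."""
    clf = LogisticRegression(penalty="l2", max_iter=1000)
    clf.fit(train_vecs, train_labels)
    return f1_score(test_labels, clf.predict(test_vecs), average="macro")
```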
DISPLAYFORM0 In this work, we have shown that accounting for language concepts such as word ordering, syntactic and semantic information in neural autoregressive topic models helps to better estimate the probability of a word in a given context. To this end, we have combined a neural autoregressive topic-(i.e., DocNADE) and a neural language (e.g., LSTM-LM) model in a single probabilistic framework with an aim to introduce language concepts in each of the autoregressive steps of the topic model. This facilitates learning a latent representation from the entire document whilst accounting for the local dynamics of the collocation patterns, encoded in the internal states of LSTM-LM. We further augment this complementary learning with external knowledge by introducing word embeddings. Our experimental show that our proposed modeling approaches consistently outperform stateof-the-art generative topic models, quantified by generalization (perplexity), topic interpretability (coherence), and applicability (text retrieval and categorization) on 15 datasets. Label: training Instructors shall have tertiary education and experience in the operation and maintenance of the equipment or sub-system of Plant. They shall be proficient in the use of the English language both written and oral. They shall be able to deliver instructions clearly and systematically. The curriculum vitae of the instructors shall be submitted for acceptance by the Engineer at least 8 weeks before the commencement of any training. Label: maintenance The Contractor shall provide experienced staff for 24 hours per Day, 7 Days per week, throughout the Year, for call out to carry out On-call Maintenance for the Signalling System. Label: cables Unless otherwise specified, this standard is applicable to all cables which include single and multi-core cables and wires, Local Area Network (LAN) cables and Fibre Optic (FO) cables. Label: installation The Contractor shall provide and permanently install the asset labels onto all equipment supplied under this Contract. The Contractor shall liaise and co-ordinate with the Engineer for the format and the content of the labels. The Contractor shall submit the final format and size of the labels as well as the installation layout of the labels on the respective equipment, to the Engineer for acceptance. Label: operations, interlocking It shall be possible to switch any station Interlocking capable of reversing the service into "Auto-Turnaround Operation". This facility once selected shall automatically route Trains into and out of these stations, independently of the ATS system. At stations where multiple platforms can be used to reverse the service it shall be possible to select one or both platforms for the service reversal. TAB10: Perplexity scores for different λ in Generalization task: Ablation over validation set labels are not used during training. The class labels are only used to check if the retrieved documents have the same class label as the query document. To perform document retrieval, we use the same train/development/test split of documents discussed in data statistics (experimental section) for all the datasets during learning. See TAB1 for the hyperparameters in the document retrieval task. We used gensim (https://github.com/RaRe-Technologies/gensim) to train Doc2Vec models for 12 datasets. Models were trained with distributed bag of words, for 1000 iterations using a window size of 5 and a vector size of 500. 
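The doc2vec baseline configuration stated above (distributed bag of words, 1000 iterations, window size 5, 500-dimensional vectors) roughly corresponds to the following gensim call. Parameter names follow recent gensim releases (older versions use size and iter instead of vector_size and epochs), and min_count is an assumption.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def train_doc2vec(tokenized_docs):
    """Distributed bag of words (dm=0), 1000 iterations, window 5, 500-dim vectors."""
    corpus = [TaggedDocument(words, [i]) for i, words in enumerate(tokenized_docs)]
    return Doc2Vec(corpus, dm=0, vector_size=500, window=5, epochs=1000, min_count=1)

# Vectors for unseen (test) documents are then obtained with model.infer_vector(tokens).
```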
We used the same split in training/development/test as for training the Doc2Vec models (also same split as in IR task) and trained a regularized logistic regression classifier on the inferred document vectors to predict class labels. In the case of multilabel datasets (R21578,R21578title, RCV1V2), we used a one-vs-all approach. Models were trained with a liblinear solver using L2 regularization and accuracy and macro-averaged F1 score were computed on the test set to quantify predictive power. We used LFTM (https://github.com/datquocnguyen/LFTM) to train glove-DMM and glove-LDA models. Models were trained for 200 iterations with 2000 initial iterations using 200 topics. For short texts we set the hyperparameter beta to 0.1, for long texts to 0.01; the mixture parameter lambda was set to 0.6 for all datasets. The setup for the classification task was the same as for doc2vec; classification was performed using relative topic proportions as input (i.e. we inferred the topic distribution of the training and test documents and used the relative distribution as input Topic coherence (NPMI) using 20 topics: DocNADE (.18) and SCHOLAR (.35), i.e., SCHOLAR BID3 generates more coherence topics than DocNADE, though worse in PPL and text classification (see section 3.3) than DocNADE, ctx-DocNADE and ctx-DocNADEe. IR tasks: Since, SCHOLAR BID3 without meta-data equates to ProdLDA and we have shown in section 3.3 that ProdLDA is worse on IR tasks than our proposed models, therefore one can infer the performance of SCHOLAR on IR task. The experimental above suggest that the DocNADE is better than SCHOLAR in generating good representations for downstream tasks such as information retrieval or classification, however falls behind SCHOLAR in interpretability. The investigation opens up an interesting direction for future research. | Unified neural model of topic and language modeling to introduce language structure in topic models for contextualized topic vectors | 1,264 | scitldr |
The large memory requirements of deep neural networks strain the capabilities of many devices, limiting their deployment and adoption. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496x; with the same model accuracy, this in up to a 1.51x improvement over the state-of-the-art. The continued success of deep neural networks (DNNs) comes with increasing demands on compute, memory, and networking resources. Moreover, the correlation between model size and accuracy suggests that tomorrow's networks will only grow larger. This growth presents a challenge for resource-constrained platforms such as mobile phones and wireless sensors. As new hardware now enables executing DNN inferences on these devices BID0 ), a practical issue that remains is reducing the burden of distributing the latest models especially in regions of the world not using high-bandwidth networks. For instance, it is estimated that, globally, 800 million users will be using 2G networks by 2020 BID11, which can take up to 30 minutes to download just 20MB of data. By contrast, today's DNNs are on the order of tens to hundreds of MBs, making them difficult to distribute. In addition to network bandwidth, storage capacity on resource-constrained devices is limited, as more applications look to leverage DNNs. Thus, in order to support state-of-the-art deep learning methods on edge devices, methods to reduce the size of DNN models without sacrificing model accuracy are needed. Model compression is a popular solution for this problem. A variety of compression algorithms have been proposed in recent years and many exploit the intrinsic redundancy in model weights. Broadly speaking, the majority of this work has focused on ways of simplifying or eliminating weight values (e.g., through weight pruning and quantization), while comparatively little effort has been spent on devising techniques for encoding and compressing. In this paper we propose a novel lossy encoding method, Weightless, based on Bloomier filters, a probabilistic data structure BID5. Bloomier filters inexactly store a function map, and by adjusting the filter parameters, we can elect to use less storage space at the cost of an increasing chance of erroneous values. We use this data structure to compactly encode the weights of a neural network, exploiting redundancy in the weights to tolerate some errors. In conjunction with existing weight simplification techniques, namely pruning and clustering, our approach dramatically reduces the memory and bandwidth requirements of DNNs for over the wire transmission and ondevice storage. Weightless demonstrates compression rates of up to 496× without loss of accuracy, improving on the state of the art by up to 1.51×. Furthermore, we show that Weightless scales better with increasing sparsity, which means more sophisticated pruning methods yield even more benefits. This work demonstrates the efficacy of compressing DNNs with lossy encoding using probabilistic data structures. 
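To ground the simplification steps that Weightless builds on, the sketch below applies magnitude pruning followed by k-means clustering to a weight matrix, producing the sparse, quantized weights that the encoding stage then compresses. The sparsity level, cluster count and use of scikit-learn are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def prune_and_cluster(weights, sparsity=0.95, n_clusters=16):
    """Magnitude-prune a weight matrix and cluster the surviving values into a
    small codebook, i.e. the two lossy simplification steps applied before encoding."""
    flat = weights.flatten()
    threshold = np.quantile(np.abs(flat), sparsity)
    mask = np.abs(flat) > threshold                       # surviving nonzero weights

    kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(flat[mask].reshape(-1, 1))
    codebook = kmeans.cluster_centers_.ravel()
    cluster_ids = kmeans.predict(flat[mask].reshape(-1, 1))

    simplified = np.zeros_like(flat)
    simplified[mask] = codebook[cluster_ids]
    return simplified.reshape(weights.shape), codebook
```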
Even after the same aggressive lossy simplification steps of weight pruning and clustering (see Section 2), there is still sufficient extraneous information left in model weights to allow an approximate encoding scheme to substantially reduce the memory footprint without loss of model accuracy. Section 3 reviews Bloomier filters and details Weightless. State-of-the-art compression using Weightless are presented in Section 4. Finally, in Section 4.3 shows that Weightless scales better as networks become more sparse compared to the previous best solution. Our goal is to minimize the static memory footprint of a neural network without compromising accuracy. Deep neural network weights exhibit ample redundancy, and a wide variety of techniques have been proposed to exploit this attribute. We group these techniques into two categories: methods that modify the loss function or structure of a network to reduce free parameters and methods that compress a given network by removing unnecessary information. The first class of methods aim to directly train a network with a small memory footprint by introducing specialized structure or loss. Examples of specialized structure include low-rank, structured matrices of BID25 and randomly-tied weights of BID6. Examples of specialized loss include teacher-student training for knowledge distillation BID3 BID16 and diversity-density penalties BID27. These methods can achieve significant space savings, but also typically require modification of the network structure and full retraining of the parameters. An alternative approach, which is the focus of this work, is to compress an existing, trained model. This exploits the fact that most neural networks contain far more information than is necessary for accurate inference BID9. This extraneous information can be removed to save memory. Much prior work has explored this opportunity, generally by applying a two-step process of first simplifying weight matrices and then encoding them in a more compact form. Simplification changes the number or characteristic of weight values to reduce the information needed to represent them. For example, pruning by selectively zeroing weight values BID18 BID12 can, in some cases, eliminate over 99% of the values without penalty. Similarly, most models do not need many bits of information to represent each weight. Quantization collapses weights to a smaller set of unique values, for instance via reduction to fixed-point binary representations BID13 or clustering techniques BID10.Simplifying weight matrices can further enable the use of more compact encoding schemes, improving compression. For example, two recent works BID14; BID7 encode pruned and quantized DNNs with sparse matrix representations. In both works, however, the encoding step is a lossless transformation, applied on top of lossy simplification. Weightless is a lossy encoding scheme based around Bloomier filters. We begin by describing what a Bloomier filter is, how to construct one, and how to retrieve values from it. We then show how we encode neural network weights using this data structure and propose a set of augmentations to make it an effective compression strategy for deep neural networks. A Bloomier filter generalizes the idea of a Bloom filter BID1, which are data structures that answer queries about set membership. Given a subset S of a universe U, a Bloom filter answers queries of the form, "Is v ∈ S?". 
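As a toy illustration of the set-membership behavior just described (false positives possible, false negatives impossible), the following sketch implements a minimal Bloom filter. The hashing scheme and sizes are arbitrary choices for the example, not those used in the paper.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: answers "is v in S?" with possible false positives."""

    def __init__(self, m_bits=1024, n_hashes=4):
        self.m, self.k = m_bits, n_hashes
        self.bits = bytearray(m_bits)          # all zeros initially

    def _indices(self, item):
        # derive k table indices from the item (illustrative hashing scheme)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def query(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[idx] for idx in self._indices(item))
```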
If v is in S, the answer is always yes; if v is not in S, there is some probability of a false positive, which depends on the size of the filter, as size is proportional to encoding strength. By allowing false positives, Bloom filters can dramatically reduce the space needed to represent the set. A Bloomier filter BID5 is a similar data structure but instead encodes a function. For each v in a domain S, the function has an associated value f(v) in the range R = [0, 2^r). Given an input v, a Bloomier filter always returns f(v) when v is in S. When v is not in S, the Bloomier filter returns a null value ⊥, except that some fraction of the time there is a "false positive", and the Bloomier filter returns an incorrect, non-null value in the range R. (Figure caption: W' is an inexact reconstruction of W from a compressed projection X. To retrieve the value w_{i,j}, we hash its location and exclusive-or the corresponding entries of X together with a computed mask M. If the resulting value falls within the range [0, 2^(r−1)), it is used for w_{i,j}; otherwise, it is treated as a zero. The red path on the left shows a false positive due to collisions in X and a random M value.) Decoding: Let S be the subset of values in U to store, with |S| = n. A Bloomier filter uses a small number of hash functions (typically four), and a hash table X of m = cn cells for some constant c (1.25 in this paper), each holding t > r bits. For hash functions H_0, H_1, H_2 and H_M, the table is constructed so that f(v) = X_{H_0(v)} ⊕ X_{H_1(v)} ⊕ X_{H_2(v)} ⊕ H_M(v) for every v in S. Hence, to find the value of f(v), hash v four times, perform three table lookups, and exclusive-or together the four values. Like the Bloom filter, querying a Bloomier filter runs in O(1) time. For u ∉ S, the result X_{H_0(u)} ⊕ X_{H_1(u)} ⊕ X_{H_2(u)} ⊕ H_M(u) will be uniform over all t-bit values. If this is not in [0, 2^r), then ⊥ is returned; if it happens to land in [0, 2^r), a false positive occurs and a value is (incorrectly) returned. An incorrect value is therefore returned with probability 2^(r−t). Encoding: Constructing a Bloomier filter involves finding values for X such that the relationship above holds for all values in S. There is no known efficient way to do so directly. All published approaches involve searching for configurations with randomized algorithms. In their paper introducing Bloomier filters, BID5 give a greedy algorithm which takes O(n log n) time and produces a table of size cnt bits with high probability. BID4 provide two slightly better constructions. First, they give a method with identical space requirements that runs in O(n) time. They also show a separate O(n log n)-time algorithm for producing a smaller table with c closer to 1. Using a more sophisticated algorithm for construction should allow for a more compact table and, by extension, improve the overall compression rate. However, we leave this for future work and use the method of BID5 for simplicity. While construction can be expensive, it is a one-time cost. Moreover, the absolute runtime is small compared to the time it takes to train a deep neural network. In the case of VGG-16, our unoptimized Python code built a Bloomier filter within an hour. We see this as a small worthwhile overhead given the savings offered and in contrast to the days it can take to fully train a network. We propose using the Bloomier filter to compactly store weights in a neural network. The function f encodes the mapping from indices of nonzero weights to their corresponding values. Given a weight matrix W, define the domain S to be the set of indices {(i, j) | w_{i,j} ≠ 0}.
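The decoding procedure just described can be sketched in a few lines. The hash construction below (a SHA-256 digest split into three table locations and a mask) is an illustrative stand-in for the paper's Mersenne-Twister-based hashing; only the XOR-and-range-check logic is the point.

```python
import hashlib

def _hashes(key, m, t, seed=0):
    """Three table locations in [0, m) and a t-bit mask derived from `key`."""
    digest = hashlib.sha256(f"{seed}:{key}".encode()).digest()
    locs = [int.from_bytes(digest[4 * i:4 * i + 4], "big") % m for i in range(3)]
    mask = int.from_bytes(digest[12:16], "big") % (1 << t)
    return locs, mask

def bloomier_query(table, key, t, r, seed=0):
    """Return the stored value for `key`, or None (the null symbol)."""
    locs, mask = _hashes(key, len(table), t, seed)
    value = table[locs[0]] ^ table[locs[1]] ^ table[locs[2]] ^ mask
    # Values below 2^r are treated as encoded entries; anything larger is null.
    return value if value < (1 << r) else None
```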
Likewise, the range R is [−2 a−1, 2 a−1) − {0} for a such that all values of W fall within the interval. Due to weight value clustering (see below) this range is remapped to [0, 2 r−1) and encodes the cluster indices. A null response from the filter means the weight has a value of zero. Once f is encoded in a filter, an approximation W of the original weight matrix is reconstructed by querying it with all indices. The original nonzero elements of W are preserved in the approximation, as are most of the zero elements. A small subset of zero-valued weights in W will take on nonzero values as a of random collisions in X, possibly changing the model's output. FIG0 illustrates the operation of this scheme: An original nonzero is correctly recalled from the filter on the right and a false positive is created by an erroneous match on the left (red).Complementing Bloomier filters with simplification Because the space used by a Bloomier filter is O(nt), they are especially useful under two conditions: The stored function is sparse (small n, with respect to |U |) and It has a restricted range of output values (small r, since t > r). To improve overall compression, we pair approximate encoding with weight transformations. Pruning networks to enforce sparsity (condition 1) has been studied extensively BID15 BID18. In this paper, we consider two different pruning techniques: (i) magnitude threshold plus retraining and (ii) dynamic network surgery (DNS) BID12. Magnitude pruning with retraining as straightforward to use and offers good . DNS is a recently proposed technique that prunes the network during training. We were able to acquire two sets of models, LeNet-300-100 and LeNet5, that were pruned using DNS and include them in our evaluation; as no reference was published for VGG-16 only magnitude pruning is used. Regardless of how it is accomplished, improving sparsity will reduce the overall encoding size linearly with the number of non-zeros with no effect on the false positive rate (which depends only on r and t). The reason for using two methods is to demonstrate the benefits of Weightless as networks increase in sparsity, the DNS networks are notably more sparse than the same networks using magnitude pruning. Reducing r (condition 2) amounts to restricting the range of the stored function or minimizing the number of bits required to represent weight values. Though many solutions to discretize weights exist (e.g., limited binary precision and advanced quantization techniques BID7), we use k-means clustering. After clustering the weight values, the k centroids are saved into an auxiliary table and the elements of W are replaced with indices into this table. This style of indirect encoding is especially well-suited to Bloomier filters, as these indices represent a small, contiguous set of integers. Another benefit of using Bloomier filters is that k does not have to be a power of 2. When decoding Bloomier filters, the of the XORs can be checked with an inequality, rather than a bitmask. This allows Bloomier filters to use k exactly, reducing the false positive rate by a factor of 1 − k 2 r. In other methods, like that of BID14, there is no benefit, as any k not equal to a power of two strictly wastes space. Tuning the t hyperparameter The use of Bloomier filters introduces an additional hyperparameter t, the number of bits per cell in the Bloomier filter. t trades off the Bloomier filter's size and the false positive rate which, in turn, effects model accuracy. 
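A rough sketch of the weight-simplification step and the approximate reconstruction it enables is given below, assuming a pruned weight matrix. The use of scikit-learn's KMeans and the `query_fn` callback (standing in for a Bloomier filter lookup such as the one sketched above) are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def simplify_weights(W, k):
    """Cluster the nonzero entries of a pruned weight matrix W into k centroids.

    Returns the centroid table and a dict mapping (i, j) -> cluster index,
    i.e. the key/value pairs that the Bloomier filter would store.
    """
    rows, cols = np.nonzero(W)
    values = W[rows, cols].reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10).fit(values)
    centroids = km.cluster_centers_.ravel()
    mapping = {(int(i), int(j)): int(c) for i, j, c in zip(rows, cols, km.labels_)}
    return centroids, mapping

def reconstruct(shape, centroids, query_fn):
    """Rebuild an approximate W' by querying every index; null responses become 0."""
    W_hat = np.zeros(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            c = query_fn((i, j))             # e.g. a bloomier_query closure
            if c is not None:
                W_hat[i, j] = centroids[c]   # false positives yield spurious nonzeros
    return W_hat
```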
While t needs to be tuned, we find it far easier to reason about than other DNN hyperparameters. (Figure 3 caption: Weightless operates layer-by-layer, alternating between simplification and lossy encoding. Once a Bloomier filter is constructed for a weight matrix, that layer is frozen and the subsequent layers are briefly retrained; only a few epochs are needed.) Because we encode k clusters, t must be greater than log2 k, and each additional bit of t reduces the number of false positives by a factor of 2. This limits the number of reasonable values for t: when t is too low the networks experience substantial accuracy loss, but they also do not benefit from high values of t because they have enough implicit resilience to handle low error rates (see FIG1). Experimentally we find that t typically falls in the range of 6 to 9 for our models. Retraining to mitigate the effects of false positives: We encode each layer's weights sequentially. Because the weights are fixed, the Bloomier filter's false positives are deterministic. This allows for the retraining of deeper network layers to compensate for errors. It is important to note that encoded layers are not retrained; the randomness of the Bloomier filter would sacrifice all benefits of mitigating the effects of errors. If the encoded layer were retrained, a new encoding would have to be constructed (because S changes), and the indices of weights that result in false positives would differ after every iteration of retraining. Instead, we find retraining all subsequent layers to be effective, typically allowing us to reduce t by one or two bits. The process of retraining around faults is relatively cheap, requiring on the order of tens of epochs to converge. The entire optimization pipeline is shown in Figure 3. Compressing Bloomier filters: When sending weight matrices over a network or saving them to disk, it is not necessary to retain the ability to access weight values as they are being sent, so it is advantageous to add another layer of compression for transmission. We use arithmetic coding, an entropy-optimal stream code which exploits the distribution of values in the table BID20. Because the nonzero entries in a Bloomier filter are, by design, uniformly distributed values in [1, 2^t − 1), improvements from this stage largely come from the prevalence of zero entries. We evaluate Weightless on three deep neural networks commonly used to study compression: LeNet-300-100, LeNet5, and VGG-16 BID24. The LeNet networks use MNIST (Lecun & Cortes) and VGG-16 uses ImageNet BID23. The networks are trained and tested in Keras BID8. The Bloomier filter was implemented in house and uses a Mersenne Twister pseudorandom number generator for uniform hash functions. To reduce the cost of constructing the filters for VGG-16, we shard the non-zero weights into ten separate filters, which are built in parallel to reduce construction time. Sharding does not significantly affect compression or the false positive rate as long as the number of shards is small BID2. We focus on applying Weightless to the largest layers in each model, as shown in TAB1. This corresponds to the first two fully-connected layers of LeNet-300-100 and VGG-16. For LeNet5, the second convolutional layer and the first fully-connected layer are the largest. These layers account for 99.6%, 99.7%, and 86% of the weights in LeNet5, LeNet-300-100, and VGG-16, respectively. (The DNS version is slightly different than magnitude pruning, however the trend is the same.)
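The layer-by-layer procedure of Figure 3 can be summarized as a short driver loop. Everything here is schematic: the callables (`simplify`, `build_filter`, `reconstruct`, `retrain_later`) are placeholders for the steps described above rather than real APIs, and the sketch simply makes the ordering (simplify, encode, freeze, retrain subsequent layers) explicit.

```python
def weightless_pipeline(layers, simplify, build_filter, reconstruct, retrain_later):
    """Schematic driver for the layer-by-layer Weightless procedure.

    layers        : {layer_index: weight ndarray}, iterated in network order.
    simplify      : W -> (centroids, {(i, j): cluster_index})
    build_filter  : mapping -> Bloomier-filter-like object with a .query method
    reconstruct   : (shape, centroids, query_fn) -> lossy weight matrix W'
    retrain_later : layer_index -> None, briefly retrains only subsequent layers
    """
    encoded = {}
    for idx, W in layers.items():
        centroids, mapping = simplify(W)
        filt = build_filter(mapping)
        encoded[idx] = (filt, centroids)
        W_hat = reconstruct(W.shape, centroids, filt.query)
        layers[idx] = W_hat          # freeze this layer at its lossy reconstruction
        retrain_later(idx)           # deterministic false positives are compensated here
    return encoded
```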
While other layers could be compressed, they offer diminishing returns. The below present both the absolute compression ratio and the improvements Weightless achieves relative to Deep Compression BID14, which represents the current state-of-the-art. The absolute compression ratio is with respect the original standard 32-bit weight size of the models. This baseline is commonly used to report in publications and, while larger than many models used in practice, it provides the complete picture for readers to draw their own . For comparison to a more aggressive baseline, we reimplemented Deep Compression in Keras. Deep Compression implements a lossless optimization pipeline where pruned and clustered weights are encoded using compressed sparse row encoding (CSR) and then compresses CSR encoding tables with Huffman coding. The compression achieved by Deep Compression we use as a baseline is notably better than the original publication (e.g., VGG-16 FC-0 went from 91× to 119×).Error baseline Because Weightless performs lossy compression, it is important to bound the impact of the loss. We establish this bound as the error of the trained network after the simplification steps (i.e., post pruning and clustering). In doing so, the test errors from compressing with Weightless and Deep Compression are the same (shown as Baseline Error % in TAB1). Weightless is sometimes slightly better due to training fluctuations, but never worse. While Weightless does provide a tradeoff between compression and model accuracy, we do not consider it here. Instead, we feel the strongest case for this method is to compare against a lossless technique with iso-accuracy and note compression ratio will only improve in any use case where degradation in model accuracy is tolerable. Given a simplified baseline model, we first evaluate the how well Bloomier filters encode sparse weights. Results for Bloomier encoding are presented in TAB2, and show that the filters work exceptionally well. In the extreme case, the large fully connected layer in LeNet5 is compressed by 445×. With encoding alone and demonstrates a 1.99× improvement over CSR, the alternative encoding strategy used in Deep Compression. Recall that the size of a Bloomier filter is proportional to mt, and so sparsity and clustering determine how compact they can be. Our suggest that sparsity is more important than the number of clusters for reducing the encoding filter size. This can be seen by comparing each LeNet models' magnitude pruning to those of dynamic network surgery-while DNS needs additional clusters, the increased sparsity in a substantial size reduction. We suspect this is due to the ability of DNNs to tolerate a high false positive rate. The t value used here is already on the exponential part of the false positive curve (see FIG1 . At this point, even if k could be reduced, it is unlikely t can be since the additional encoding strength saved by reducing k does little to protect against the doubling of false positives when in this range. For VGG-16 FC-0, there are more false positives in the reconstructed weight matrix than there are non-zero weights originally; using t = 6 in over 6.2 million false positives while after simplification there are only 3.07 million weights. Before recovered with retraining, Bloomier filter encoding increased the top-1 error by 2.0 percentage points. 
This is why we see Bloomier filters work so well here: most applications cannot function with this level of approximation, nor do they have an analogous retraining mechanism to mitigate the errors' effects. BID14 propose using Huffman coding for their results, and we use arithmetic coding, as described in Section 3.2. The results in TAB3 show that while Deep Compression gets relatively more benefit from a final compression stage, Weightless remains a substantially better scheme overall. Prior work by BID21 on regular Bloom filters has shown that they can be optimized for better post-compression size. (Figure 4 caption: Weightless exploits sparsity more effectively than Deep Compression. By setting pruning thresholds to produce specific nonzero ratios, we can quantify sparsity scaling. There is no loss of accuracy at any point in this plot.) We believe a similar method could be used on Bloomier filters, but we leave this for future work. Recent work continues to demonstrate better ways to extract sparsity from DNNs BID12 BID26 BID22, so it is useful to quantify how different encoding techniques scale with increasing sparsity. As a proxy for improved pruning techniques, we set the threshold for magnitude pruning to produce varying ratios of nonzero values for LeNet5 FC-0. We then perform retraining and clustering as usual and compare encoding with Weightless and Deep Compression (all without loss of accuracy). Figure 4 shows that as sparsity increases, Weightless delivers far better compression ratios. Because the false positive rate of Bloomier filters is controlled independently of the number of nonzero entries, and addresses are hashed rather than stored, Weightless tends to scale very well with sparsity. On the other hand, as the total number of entries in CSR decreases, the magnitude of every index grows slightly, offsetting some of the benefits. This paper demonstrates a novel lossy encoding scheme, called Weightless, for compressing sparse weights in deep neural networks. The lossy property of Weightless stems from its use of the Bloomier filter, a probabilistic data structure for approximately encoding functions. By first simplifying a model with weight pruning and clustering, we transform its weights to best align with the properties of the Bloomier filter and maximize compression. Combined, Weightless achieves compression of up to 496×, improving the previous state-of-the-art by 1.51×. We also see avenues for continuing this line of research. First, as better mechanisms for pruning model weights are discovered, end-to-end compression with Weightless will improve commensurately. Second, the theory community has already developed more advanced, albeit more complicated, construction algorithms for Bloomier filters, which promise asymptotically better space utilization compared to the method used in this paper. Finally, by demonstrating the opportunity for using lossy encoding schemes for model compression, we hope we have opened the door for more research on encoding algorithms and novel uses of probabilistic data structures. Given a set of key-value pairs, in this case weight addresses and cluster indexes, a Bloomier filter is constructed as follows. The task of construction is to find a unique location in the table for each key such that the value can be encoded there. For each key to be encoded, a neighborhood of 3 hash digests is generated; the indexes of these 3 digests are named iotas. To begin, the neighborhoods of all keys are computed and the unique digests are identified.
Keys with unique neighbors are removed from the list of keys to be encoded. When a key is associated with a unique location, its iota value (i.e., the unique neighbor index into the neighborhood of hashs), is saved in an ordered list along with the key itself. The process is repeated for the remaining keys and continues until either all keys identify a unique location or none can be found. In the case that this search fails a different hash function can be tried or a larger table is required (increasing m).Once the unique locations and ordering is known, the encoding can begin. Values are encoded into the filter in the reverse order in which the unique locations are found during the search described above. This is done such that as non-unique neighbors of keys collide, they can still resolve the correct encoded values. For each key, the unique neighbor (indexed by the saved iota value) is set as the XOR of all neighbor filter entries (each a t-bit length vector), a random mask M, and the key's value. As the earlier found unique key locations are populated in the filter, it is likely that the neighbor values will be non-zero. By XORing together them in reverse order the encoding scheme guarantees that the correct value is retrieved. (The XORs can be thought of cancelling out all the erroneous information except that of the true value.)An implementation of the Bloomier filter will be released along with publication. Validation Error (%) Figure 5: Plot of retraining (as described in Section 3.2) VGG16's FC-1 after encoding FC-0 in a Bloomier filter. The baseline validation accuracy (on 25000 samples) is 30% and after encoding FC-0, with a Bloomier filter, this increases to 33%. As the plots hows, after a few epochs pass the error introduced by the lossy encoding is recovered. In comparison, test accuracy after encoding FC-0 and before retraining FC-1 is 38.0% and goes down by 2% to 36.0% which maintains the model's baseline performance. Figure 6: Sensitivity analysis of t with respect to the number of weight clusters, k, for FC-0 in LeNet-300-100. As per FIG1, model error decreases with the number of false positives, which are determined by the difference between t and k. This is emphasized here as k = 8 performs strictly better than higher k values as it in the strongest encoding given t. Figure 7: Demonstrating iso-accuracy comparisons between previous state-of-the-art DNN compression techniques and Weightless for FC-0 in LeNet5. In comparison to TAB3, we see that with respect to accuracy, Weightless' compression ratio scales significantly better than Deep Compression. This is advantageous in applications that are amenable to reasonable performance loss. To generate this plot we swept the pruning threshold for both Weightless and Deep Compression, and t from 6 to 9 for Weightless. | We propose a new way to compress neural networks using probabilistic data structures. | 1,265 | scitldr |
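The greedy search and reverse-order encoding described in this appendix can be sketched as follows. The `hash_fn` argument is assumed to return three table locations and a t-bit mask for a key (for example, the `_hashes` helper from the decoding sketch earlier), and the retry loop over seeds mirrors the "try a different hash function or a larger table" fallback. This is an illustrative reconstruction, not the authors' implementation.

```python
def build_bloomier(mapping, m, hash_fn, max_tries=20):
    """Greedy Bloomier construction sketch.

    mapping : {key: value} pairs to encode (values must fit in t bits).
    hash_fn : hash_fn(key, seed) -> ([loc0, loc1, loc2], mask), locations in [0, m).
    Returns the table X and the seed that succeeded.
    """
    for seed in range(max_tries):
        nbh = {k: hash_fn(k, seed) for k in mapping}
        remaining, order = set(mapping), []
        while remaining:
            counts = {}
            for k in remaining:
                for loc in nbh[k][0]:
                    counts[loc] = counts.get(loc, 0) + 1
            found = [k for k in remaining
                     if any(counts[loc] == 1 for loc in nbh[k][0])]
            if not found:
                break                                # stuck: retry with a new seed
            for k in found:
                locs, _ = nbh[k]
                iota = next(i for i, loc in enumerate(locs) if counts[loc] == 1)
                order.append((k, iota))              # remember the unique slot (iota)
                remaining.discard(k)
        if remaining:
            continue
        X = [0] * m
        for key, iota in reversed(order):            # encode in reverse discovery order
            locs, mask = nbh[key]
            acc = mapping[key] ^ mask
            for i, loc in enumerate(locs):
                if i != iota:
                    acc ^= X[loc]                    # cancel out the neighbor entries
            X[locs[iota]] = acc
        return X, seed
    raise RuntimeError("Construction failed; increase m or max_tries.")
```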
Recent work has shown that contextualized word representations derived from neural machine translation (NMT) are a viable alternative to such from simple word predictions tasks. This is because the internal understanding that needs to be built in order to be able to translate from one language to another is much more comprehensive. Unfortunately, computational and memory limitations as of present prevent NMT models from using large word vocabularies, and thus alternatives such as subword units (BPE and morphological segmentations) and characters have been used. Here we study the impact of using different kinds of units on the quality of the ing representations when used to model syntax, semantics, and morphology. We found that while representations derived from subwords are slightly better for modeling syntax, character-based representations are superior for modeling morphology and are also more robust to noisy input. Recent years have seen the rise of deep neural networks and the subsequent rise of representation learning based on network-internal activations. Such representations have been shown useful when addressing various problems from fields such as image recognition, speech recognition BID2, and natural language processing (NLP) BID30. The central idea is that the internal representations trained to solve an NLP task could be useful for other tasks as well. For example, word embeddings learned for a simple word prediction task in context, word2vec-style BID31, have now become almost obligatory in state-ofthe-art NLP models. One issue with such word embeddings is that the ing representation is context-independent. Recently, it has been shown that huge performance gains can be achieved by contextualizing the representations, so that the same word could have a different embedding in different contexts. This is best achieved by changing the auxiliary task, e.g., the ElMo model learns contextualized word embeddings from a language modeling task, using LSTMs BID37.More recently, it has been shown that complex tasks such as neural machine translation can yield superior representations BID29. This is because the internal understanding of the input language that needs to be built by the network in order to be able to translate from one language to another needs to be much more comprehensive compared to what would be needed for a simple word prediction task. Such representations have yielded state-of-the-art for tasks such as sentiment analysis, textual entailment, and question answering. Unfortunately, computational and memory limitations as of present prevent neural machine translation (NMT) models from using large-scale vocabularies, typically limiting them to 30-50k words. This is a severe limitation, as most NLP applications need to handle vocabularies of millions of words, e.g., word2vec BID31, GloVe BID36 and FastText BID32 offer pre-trained embeddings for 3M, 2M, and 2.5M words/phrases, respectively. The problem is typically addressed using byte-pair encoding (BPE), where words are segmented into pseudo-word character sequences based on frequency BID43. A somewhat less popular solution is to use characters as the basic unit of representation BID8. 
In the case of morphologically complex languages, another alternative is to reduce the vocabulary by using unsupervised morpheme segmentation BID6 ).The impact of using different units of representation in NMT models has been studied in previous work BID27 BID10 BID8 , among others), but the focus has been exclusively on the quality of the ing translation output. However, it remains unclear what input and output units should be chosen if we are primarily interested in representation learning. Here, we aim at bridging this gap by evaluating the quality of NMT-derived embeddings originating from units of different granularity when used for modeling morphology, syntax, and semantics (as opposed to end tasks such as sentiment analysis and question answering). Our contributions can be summarized as follows:• We study the impact of using words vs. characters vs. BPE units vs. morphological segments on the quality of representations learned by NMT models when used to model morphology, syntax, and semantics.• We further study the robustness of these representations with respect to noise.• We make practical recommendations based on our . We found that while representations derived from morphological segments are better for modeling syntax, character-based ones are superior for morphology and are also more robust to noise. Representation analysis aims at demystifying what is learned inside the neural network black-box. This includes analyzing word and sentence embeddings BID1 BID39 BID12 , among others), RNN states (a; BID44 BID52 BID50, and NMT representations BID44 BID4, as applied to morphological BID39 BID49, semantic BID39 and syntactic BID28 BID47 BID9 tasks. While previous work focused on words, here we compare units of different granularities. Subword translation units aim at reducing vocabulary size and OOV rate. NMT researchers have used BPE units BID43, morphological segmentation BID6, characters, and hybrid units BID27 BID10 . There have also been comparisons between subword units in the context of NMT BID42 . Unlike this work, here we focus on representation learning rather than on translation quality. Robustness to noise is an important aspect in machine learning. It has been studied for various machine learning models BID46 BID15, including NLP models BID34 BID41 BID11 BID13 BID21, and character-based NMT models BID17 BID3 . Unlike the above work, we compare robustness to noise for units of different granularity. Moreover, we focus on representation learning rather than translation. Our methodology is inspired by research on interpreting neural network (NN) models. A typical framework involves extracting feature representations from different components (e.g., encoder/decoder) of a trained model and then training a classifier to make predictions for an auxiliary task. The performance of the trained classifier is considered to be a proxy for judging the quality of the extracted representations with respect to the particular auxiliary task. Formally, for each input word x i we extract the LSTM hidden states from each layer of the encoder/decoder. We concatenate the representations of layers and we use them as feature vector z i for the auxiliary task. We train a logistic regression classifier by minimizing the cross-entropy loss: DISPLAYFORM0 is the probability that word x i is assigned label l. The weights θ ∈ R D×L are learned with gradient descent. Here D is the dimensionality of the latent representations z i and L is the size of the label set for P. 
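A minimal version of this probing setup, assuming the per-word activation vectors z_i have already been extracted from the trained NMT model, could look as follows. The use of scikit-learn's logistic regression is purely illustrative; the paper's classifier is the same in spirit (a linear model trained with the cross-entropy loss above).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_probe(states, labels):
    """states: (num_words, D) array of concatenated layer activations z_i.
    labels: word-level annotations (e.g. morphological, semantic, or CCG tags)."""
    probe = LogisticRegression(max_iter=1000)   # minimizes the cross-entropy loss
    probe.fit(states, labels)
    return probe

def probe_accuracy(probe, states, labels):
    """Accuracy of the trained probe, used as a proxy for representation quality."""
    return float(np.mean(probe.predict(states) == labels))
```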
We consider four representation units: words, byte-pair encoding (BPE) units, morphological units, and characters. TAB0 shows an example of each representation unit. BPE splits words into symbols (a symbol is a sequence of characters) and then iteratively replaces the most frequent sequences of symbols with a new merged symbol. In essence, frequent character n-gram sequences merge to form one symbol. The number of merge operations is controlled by a hyper-parameter OP, which directly affects the granularity of segmentation: a high value of OP means coarse segmentation and a low value means fine-grained segmentation. For morphologically segmented units, we use an unsupervised morphological segmenter, Morfessor BID45. Note that although BPE and Morfessor segment words at a similar level of granularity, the segmentation generated by Morfessor is linguistically motivated. For example, it splits the gerund shooting into base verb shoot and the suffix ing. Compare this to the BPE segmentation sho + oting, which has no linguistic justification. On the extreme, the fully character-level units treat each word as a sequence of characters. Extracting Activations for Subword and Character Units Previous work on analyzing NMT representations has been limited to the analysis of word representations only, 1 where there is a oneto-one mapping from input units (words) and their NMT representations (hidden states) to their linguistic annotations (e.g., morphological tags). In the case of subword-based systems, each word may be split into multiple subword units, and each unit has its own representation. It is less trivial to define which representations should be evaluated when predicting a word-level property. We consider two simple approximations to estimate a word representation from subword representations: (i) Average: for each word, average the activation values of all the subwords (or characters) comprising it. In the case of a bi-directional encoder, we concatenate the averages from the forward and the backward activations of the encoder on the subwords (or characters) that represent the current word. (ii) Last: consider the activation of the last subword (or character) as the representation of the word. For the bi-directional encoder, we concatenate the forward encoder's activation on the last subword unit with the backward encoder's activation on the first subword unit. This formalization allows us to analyze character-and subword-based representations at the word level via prediction tasks. Such kind of analysis has not been performed before. We choose three fundamental NLP tasks that serve as a good representative of various properties inherent in a language, ranging from morphology (word structure), syntax (grammar) and semantics (meaning). In particular, we experiment with morphological tagging for German, Czech, Russian and English 2 languages, lexical semantics tagging for English and German languages, and syntactic tagging via CCG supertagging for English language. TAB1 shows an example sentence with annotations of each task. The morphological tags capture word structure, semantic tags show semantic property, and syntax tags (CCG super tags) captures global syntactic information locally at the lexical level. 
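The two aggregation strategies (Average and Last) can be written compactly once subword-to-word alignment spans are known. The sketch below covers the unidirectional case; as described above, the bidirectional encoder instead concatenates forward and backward states (last forward unit with first backward unit), which is omitted here for brevity.

```python
import numpy as np

def word_vectors(subword_states, word_spans, method="last"):
    """Collapse per-subword (or per-character) activations into word vectors.

    subword_states: (num_units, D) array of activations for one sentence.
    word_spans: list of (start, end) unit indices covering each word.
    method: "average" pools all units of a word; "last" takes its final unit.
    """
    vecs = []
    for start, end in word_spans:
        span = subword_states[start:end]
        vecs.append(span.mean(axis=0) if method == "average" else span[-1])
    return np.stack(vecs)
```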
For example in TAB1, -the morphological tag VBZ for the word "receives", marks that it is a verb with non-third person singular present property, the semantic tag ENS describes a present simple event category, and the syntactic tag S[dcl]\NP)/NP indicates that the preposition "in" attaches to the verb. Recent studies have shown that small perturbations in the input can cause significant deterioration in the performance of the deep neural networks. Here, we evaluate the robustness of various representations under noisy input conditions. We use corpora of real errors harvested by BID3. The errors contain a good mix of typos, misspellings, and other kinds of errors. In addition, we create data with synthetic noise. We induced two kinds of errors: i) Swap and Middle. Swap is a common error which occurs when neighboring characters are mistakenly swapped (e.g., word → wodr). In Middle errors, the order of the first and the last characters of a word are preserved while the middle characters are randomly shuffled BID40 ) (e.g., example→ eaxmlpe). We corrupt (using swap or middle) or replace (using real errors corpora) n% words randomly in each test sentence. We then re-extract feature vectors for the erroneous words in a sentence and re-evaluate the prediction capability of these embeddings on the linguistic tasks. Data and Languages: We trained NMT systems for 4 language pairs: German-English, CzechEnglish, Russian-English and English-German, using data made available through the two popular machine translation campaigns, namely, WMT BID5 and IWSLT BID7. The MT models were trained using a concatenation of NEWS and TED training data. We used official TED testsets (testsets-11-13) to report translation quality BID35. The morphological classifiers were trained and tested on a concatenation of NEWS and TED testsets, which were automatically tagged as described in the next section. Semantic and syntactic classifiers were trained and tested on existing annotated corpora. Statistics are shown in TAB2.Taggers: We used RDRPOST to annotate data for the classifier. For semantic tagging, we used the the Groningen Parallel Meaning Bank BID0. The tags TAB2 for statistics. We used seq2seq-attn BID23 to train 2-layered attentional long short-term memory (LSTM) BID18 encoder-decoder systems with bidirectional encoder. We used 500 dimensions for both word embeddings and LSTM states. We trained systems with SGD for 20 epochs and used the final model for generating features for the classifier. We trained the systems in both *-to-English and English-to-* directions and analyze the representations from both encoder and decoder. To analyze the encoder-side, we fix the decoder-side with BPE-based embeddings and train the source-side with word/BPE/Morfessor/char units. Similarly, to analyze the decoder-side, we train the encoder representation with BPE units and vary the decoder side with different input units. Our motivation for this setup is to analyze representations in isolation keeping the other half of the network static across different settings. We use 50k BPE operations and limit the vocabulary of all systems to 50k. The word/BPE/Morfessor/characterbased systems were trained with sentence lengths of 80/100/100/400, respectively. The classifier is a logistic regression whose input is either hidden states in word-based models, or Last or Average representations in character-and subword-based models. 
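The two synthetic error types and the per-sentence corruption scheme are simple to reproduce; the following sketch is an illustrative implementation rather than the exact code used in the experiments.

```python
import random

def swap_noise(word):
    """Swap two adjacent characters, e.g. 'word' -> 'wodr'."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def middle_noise(word):
    """Shuffle the interior characters, keeping the first and last fixed."""
    if len(word) < 4:
        return word
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def corrupt_sentence(tokens, noise_fn, ratio=0.25):
    """Apply noise_fn to a random `ratio` of the tokens in a sentence."""
    n = max(1, int(ratio * len(tokens)))
    idx = set(random.sample(range(len(tokens)), n))
    return [noise_fn(t) if i in idx else t for i, t in enumerate(tokens)]
```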
Since we concatenate forward and backward states from all layers, this ends up being 2000/1000 dimensions when classifying the encoder/decoder: 500 dimensions×2 layers×2 directions (1 for decoder). The classifiers are trained for 10 epochs. The encoder models are trained with BPE as target and the decoder models with BPE as a source. We now present the of using representations learned from different input units on the task of predicting morphology, semantics and syntax. For subword and character units, we found that the activation of the last subword/character unit of a word consistently better than using the average of all activations. So we present the using Last method only and discuss this more later. FIG0 summarizes the for predicting morphological tags with representations learned using different units. The character-based representations consistently outperformed other representations on all language pairs while the word-based representations achieved the lowest accuracy. The differences are more significant in the case of languages with relatively complex morphology, Czech and Russian. We see a difference of up to 14% in favor of using character-based representations when compared with the word-based representations. The improvement is minimal in the case of English (1.2%), which is a morphologically simpler language. Comparing subword units as obtained using Morfessor and BPE, we found Morfessor to give much better morphological tagging performance especially in the case of morphologically rich languages, Czech and Russian. This is due to the linguistically motivated segmentations which are helpful in for learning language morphology. We further investigated whether the performance difference between various representation is due to the difference in modeling infrequent and out-of-vocabulary words. TAB4 shows the OOVs rate of each language which is higher for morphologically rich languages. Figure 2 shows that the gap between different representations is inversely related to the frequency of the word in the training data: character-based models perform much better than others on less frequent and OOV words. Decoder Representations: Next, we used the decoder representations from the English-to-* models. We saw a similar performance trend as in the case of encoder-side representations, character units performed the best while word units performed the worst. Also morphological units performed better than the BPE-based units. Comparing encoder representation with decoder representation, it is interesting to see that in several cases the decoder-side representations performed better than the encoder-side representations, even though they are trained using a uni-directional LSTM only. Since we did not see any difference in trend between encoder and decoder side representations, we only present the encoder side in the later part of the paper. Figure 3 summarizes the on the semantic tagging task. On English, subword-based (BPE and Morfessor) representations and character-based representation achieve comparable . However, for German, BPE-based representations performed better than the other representations. These contrast to morphology prediction , where character-based representations were consistently better compared to their subword-based counterparts. The final property we evaluate is CCG super-tagging, reflecting syntactic knowledge. Here we only have English tags, so we evaluate encoder representations from English→German models, trained with words, characters, and subwords. 
We found that morphologically segmented representation units perform the best, while word and BPE-based representations perform comparably. The character-based representations lag behind, though the difference in accuracy is small compared to the morphological tagging results. It is noteworthy that characters perform below both words and subwords here, contrary to their superior performance on the task of morphology. We will return to this point in the discussion in Section 6. We now evaluate the robustness of the representations towards noise. We induce errors in the testsets by corrupting 25% of the words in each sentence using different error types (synthetic or real noise), as described in Section 3.3. We extract the representations of the noisy testsets and re-evaluate the classifiers. FIG2 shows the performance on each task. Evidently, characters yield much better performance on all tasks and for all languages, showing a minimal drop in accuracy, in contrast to the earlier results where they did not outperform subword units on the task of syntactic tagging. This shows that character-based representations are more robust towards noise compared to the others. Surprisingly, in a few cases BPE-based representations performed even worse than word-based representations, e.g., in the case of syntactic tagging (80.3 vs. 81.1). We hypothesize that BPE segments a noisy word into two known subword units that may have no close relationship with the actual word. Using representations of wrong subword units resulted in a significant drop in performance. We further investigated the robustness of each classifier by increasing the percentage of noise in the test data and found that the difference in representation quality stays constant across BPE- and character-based representations. Comparing Performance Across Tasks: Character-based representations outperformed the others in the case of morphological tagging; BPE-based representations performed better than others in the semantic tagging task for German (and about the same in English); and Morfessor performed slightly better than others for syntax. Syntactic tagging requires knowledge of the complete sentence. Splitting a sentence into characters substantially increases the length of the sentence (from 50 words in a sentence to 250 characters on average). The character-based models are weaker at capturing long-distance dependencies, which could be a reason for their low performance in this task. Similarly, in the case of morphological tagging, the information about the morphology of a word depends on the surrounding words plus the internal information (root, morphemes, etc.) present in the word. The character-based system has access to all of this information, which results in high tagging performance. Morfessor performed better than BPE in the morphological tagging task because its segments are linguistically motivated units (segmented into root + morphemes), making the information about the word morphology explicit in the representation. In comparison, BPE solely focuses on the frequency of characters occurring together in the corpus and can yield linguistically incorrect units. TAB3 summarizes the translation performance of each system. In most cases, the subword-based systems perform better than the word-based and character-based systems. However, this is not true when using their representations as features in the core NLP tasks. For example, we found that character-based representations perform better than others in the morphological tagging task.
On an additional note, BPE-based representations although perform better for some tasks, are sensitive to noise. Their ability to segment any unknown words into two known subwords in less reliable systems. Notably, the translation performance of the BPE-based system falls below the character-based system even with 10% noise only. The variation in the performance of the representations reflect that they may be learning different aspects of the language. To investigate whether representations are complementary to each other, we train the classifier on their concatenation. TAB5 summarizes the on the morphological tagging task. The performance of the classifier improved in all combinations of representations while the best are achieved using all three units together. We studied the impact of using different representation units -words, characters, BPE units, and morphological segments on the representations learned by NMT. Unlike previous work, which targeted end tasks such as sentiment analysis and question answering, here we focused on modeling morphology, syntax and semantics. We found that (i) while representations derived from subwords units are slightly better for modeling syntax, (ii) character representations are distinctly better for modeling morphology, and (iii) are also more robust to noise in contrast to subword representations, (iv) and that using all representations together works best. Based on our findings, we conjecture that although BPE segmentation is a de-facto standard in building state-of-the-art NMT systems, the underlying representations it yields are suboptimal for external tasks. Character-based representations provide a more viable and robust alternative in this regard, followed by morphological segmentation. In future work, we plan to explore specialized character-based architectures for NMT. We further want to study how different units affect representation quality in non-recurrent models such as the Transformer BID48 and in convolutional architectures BID14.A SUPPLEMENTARY MATERIAL | We study the impact of using different kinds of subword units on the quality of the resulting representations when used to model syntax, semantics, and morphology. | 1,266 | scitldr |
Few-shot learning is the process of learning novel classes using only a few examples and it remains a challenging task in machine learning. Many sophisticated few-shot learning algorithms have been proposed based on the notion that networks can easily overfit to novel examples if they are simply fine-tuned using only a few examples. In this study, we show that in the commonly used low-resolution mini-ImageNet dataset, the fine-tuning method achieves higher accuracy than common few-shot learning algorithms in the 1-shot task and nearly the same accuracy as that of the state-of-the-art algorithm in the 5-shot task. We then evaluate our method with more practical tasks, namely the high-resolution single-domain and cross-domain tasks. With both tasks, we show that our method achieves higher accuracy than common few-shot learning algorithms. We further analyze the experimental and show that: 1) the retraining process can be stabilized by employing a low learning rate, 2) using adaptive gradient optimizers during fine-tuning can increase test accuracy, and 3) test accuracy can be improved by updating the entire network when a large domain-shift exists between base and novel classes. Previous studies have shown that high image classification performance can be achieved by using deep networks and big datasets (; ; ;). However, the performances of these algorithms rely heavily on extensive manually annotated images, and considerable cost is often incurred in preparing these datasets. To avoid this problem, few-shot learning, which is a task of learning novel classes using only a few examples, has been actively researched. However, few-shot learning remains a considerably challenging task in machine learning, and classification accuracy in few-shot tasks is much lower than that of the many-shot regime. This is because a network pretrained using base classes must adapt to novel classes using only a few examples. The simplest means of overcoming this difficulty is to fine-tune the network using novel classes. However, the number of trainable parameters of deep networks is so large that we believe that networks can easily overfit to novel classes if we simply fine-tune the networks using only a few examples. For example, the number of trainable parameters in the ResNet-152 is approximately 60 M, which is much greater than the number of novel examples (e.g., 25 for 5-way 5-shot learning), and this leads us to the idea of overfitting. Using various sophisticated methods, numerous studies have been conducted to prevent networks from overfitting. However, the performance of a naive fine-tuning method has not been well investigated, and has pointed out that performance of this method had been underestimated in previous studies. Therefore, in this study, we analyze the performance of a fine-tuning method and show that it can achieve higher classification accuracy than common few-shot learning methods and, in some cases, can achieve an accuracy approximating that of the state-of-the-art algorithm. We also experimentally show that: 1) a low learning rate stabilizes the retraining process, 2) using an adaptive gradient optimizer when fine-tuning the network increases test accuracy, and 3) updating the entire network increases test accuracy when a large domain shift occurs between base and novel classes. To evaluate accuracy in few-shot image classification tasks, the mini-ImageNet dataset has been used in many previous studies. 
This is a subset of the ImageNet dataset in which each image is resized to 84 × 84 to reduce computational cost. In a more recent study, the high-resolution mini-ImageNet dataset and the cross-domain dataset were proposed. Both datasets contain higher-resolution images than the original mini-ImageNet dataset, and the cross-domain dataset represents a greater challenge because base and novel classes are sampled from different datasets. Thus, a larger domain shift occurs between these classes. In this study, we evaluate the performance of our method using the high-resolution mini-ImageNet dataset (high-resolution single-domain task) and cross-domain dataset (cross-domain task) as well as the common low-resolution mini-ImageNet dataset (low-resolution single-domain task). Details of these datasets are provided in Section 2.3. The main contributions of this study are as follows: 1) We show that in the common low-resolution single-domain task, our fine-tuning method achieves higher accuracy than common few-shot learning algorithms in the 1-shot task and nearly the same accuracy as that of the state-of-the-art method in the 5-shot task. We also show that our method achieves higher accuracy than common few-shot learning methods both in the high-resolution single-domain and cross-domain tasks. Note that we do not compare the performance of our method with the state-of-the-art algorithm in the high-resolution single-domain and cross-domain tasks because the performances for these tasks are not reported in the corresponding papers. 2) We further analyze the experimental results and show that a low learning rate stabilizes the relearning process, that test accuracy can be increased by using an adaptive gradient optimizer such as the Adam optimizer, and that updating the entire network can increase test accuracy when a large domain shift occurs. 2 OVERVIEW OF FEW-SHOT LEARNING 2.1 NOTATION Few-shot learning is a task of learning novel classes using only a few labeled examples. This task is also called N-way K-shot learning, where N denotes the number of novel classes and K is the number of labeled examples per class. We focus on the 5-way learning task, as in previous studies. Labeled and unlabeled examples of novel classes are called support and query sets, respectively. A network is pretrained using base classes, which contain numerous labeled examples. Base and novel classes are mutually exclusive. Base classes are used for pretraining, and novel classes are used for retraining and testing. Validation classes are used to determine a learning rate and the number of epochs required to retrain the network. To date, numerous few-shot learning algorithms have been proposed, and these methods can be roughly classified into three categories: learning discriminative embeddings using metric-based classification, learning to learn novel classes, and data augmentation using synthetic data. Metric-learning approaches such as MatchingNet and ProtoNet tackle few-shot classification tasks by training an embedding function and applying a differentiable nearest-neighbor method to the feature space using the Euclidean metric. RelationNet was developed to replace the nearest-neighbor method with a trainable metric using convolutional and fully connected (FC) layers and has achieved higher few-shot classification accuracy. A method called weight imprinting has also been proposed; its authors showed that including normalized feature vectors of novel classes in the final-layer weight provides effective initialization for novel classes.
We use the weight-imprinting method to initialize the last FC layer before finetuning the network. These conventional methods successfully retrain networks using novel classes while preventing overfitting, but we show that few-shot classification performance can be further improved by fine-tuning networks. Meta-learning-based approaches address the few-shot learning problem by training networks to learn to learn novel classes. focused on the similarity between gradient descent methods and long short-term memory (LSTM) , and they achieved a fast adaptation to novel classes by using LSTM to update network weights. proposed a method to train a network to obtain easily adaptable parameters so that the network can adapt to novel classes by means of a few gradient steps using only a few examples. In these algorithms, networks are explicitly trained to learn how to adapt to novel classes. However, we show that networks pretrained without explicit meta-learning methods can also learn novel classes and achieve high few-shot classification accuracy. Data-augmentation-based approaches overcome data deficiencies by generating synthetic examples of novel classes. Some methods synthesize examples of novel classes by applying withinclass differences of base classes to real examples of novel classes . integrated a feature generator using a few-shot learning process and succeeded in generating synthetic data using only a few novel examples. These methods succeeded in improving the performance of few-shot learning by using synthetic examples for retraining. Nevertheless, we show that networks can adapt to novel classes by using only naive data-augmentation methods such as image flipping and image jittering. The mini-ImageNet dataset is a well-known dataset used to evaluate few-shot learning methods. The dataset was first proposed by , but the train/validation/test split proposed by is often used instead. Therefore, we used this split in this study. This dataset is a subset of the ImageNet dataset and contains 100 classes with 600 examples of each class. The classes are split into 64 base, 16 validation, and 20 novel classes. Images in this dataset are resized to 84 × 84 to reduce computational cost. used a higher-resolution mini-ImageNet dataset with an image resolution of 224 × 224 to employ deeper networks. They also revealed that the domain shift that occurs between base and novel classes in the mini-ImageNet dataset is small because the classes are sampled in the same dataset. The authors proposed the cross-domain dataset, which has a larger domain shift between these classes. In this dataset, the whole mini-ImageNet dataset is used as a set of base classes, and randomly sampled 50 and 50 classes from the CUB-200-2011 dataset are used as validation and novel classes, respectively. These datasets are more practical because they use high-resolution images, and the cross-domain dataset is more challenging because the domain shift that occurs between base and novel classes is larger. Therefore, we use the highresolution mini-ImageNet dataset and cross-domain dataset for evaluation as well as the common low-resolution mini-ImageNet dataset. The ResNet-18/34/50/101/152 and VGG-16 without FC layers are used as feature extractors in this study. Note that the last MaxPool2d layer of the VGG-16 is replaced by the GlobalAveragePool2d layer to support different resolutions of input images. We also use the simple classifier (i.e., common FC layer) and the normalized classifier. 
The technique known as weight imprinting is used in the normalized classifier; the normalized classifier is illustrated in Figure 1. Before fine-tuning the normalized network for novel classes, we initialize classifier weight W by deleting the weight and inserting columns for novel classes, as shown in Figure 1. Regarding the simple classifier, the initial weight for novel classes can be obtained by applying the multi-class linear SVM to feature vectors of novel classes. We evaluated our method using the low-resolution mini-ImageNet dataset as a common evaluation dataset. In addition, we used the high-resolution mini-ImageNet and cross-domain datasets as more practical datasets. Details of these datasets are provided in Section 2.3. In this study, we identify tasks that use these datasets as low-resolution single-domain task, the high-resolution single-domain task, and cross-domain tasks.. Each column w i ∈ R d of the classifier weight is normalized so that w i has a norm of 1 (i.e., ∥w i ∥ = 1), and classification is performed by taking the inner products between w i and normalized feature vectorẑ ∈ R d. Note that variable d is the dimension of the feature space. Before the network is fine-tuned, the initial weight for a novel class can be obtained by including feature vectorẑ of the novel class in the classifier weight. When multiple novel examples per class are available (i.e., K-shot learning with K > 1), the initial weight for the novel class can be obtained by normalizing class mean 1/K ∑ K j=1ẑ j again. Because the output range of Wẑ is [−1, 1], ensuring that the probability of the correct label approximates 1 using softmax activation is difficult. This problem can be avoided by applying scale factor s ∈ R to the output, as discussed by. The networks were pretrained by using the base classes of the datasets for 600 epochs. We used the Adam optimizer with a learning rate of 0.001 in the same manner as. These parameters for pretraining are normally optimized using validation classes, but we fixed these parameters to reduce computational cost. Input images were preprocessed by random-resized cropping with a size of 224 × 224. We also performed color jittering and random-horizontal flipping, and we subtracted channel-wise means of the ImageNet dataset (0.485, 0.456, 0.406). In addition, division by channel-wise standard deviations of the ImageNet dataset (0.229, 0.224, 0.225) was performed in the same manner as. Note that in the low-resolution single-domain task, the size of the random-resized cropping was set to 84 × 84. In this study, we compared three fine-tuning methods in which: 1) the entire network is updated, 2) the classifier weight and batch-normalization (BN) statistics are updated, and 3) only the classifier weight is updated. The third method is a common fine-tuning method to prevent overfitting. The second method is based on a previous study that successfully fine-tuned an image generator to a novel class without overfitting by updating only the BN statistics (i.e., γ and β of BN layers). A similar approach is known as meta-transfer learning (MTL) . The authors who proposed MTL showed that updating only scales and biases of network parameters prevents them from overfitting while achieving efficient adaptation to unseen tasks. Although the methods proposed by and presented similar ideas, we chose the former because of its simplicity in implementation. Initial classifier weights for novel classes were obtained before we fine-tuned the networks, as discussed in Section 3.1. 
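The normalized classifier and scale factor described above admit a compact sketch (again in PyTorch; the module name and the default scale value are assumptions, since the text only states that some scale factor s is applied to the cosine scores).

import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedClassifier(nn.Module):
    # cosine classifier: logits = s * (z_hat @ W_hat^T), with ||w_i|| = ||z_hat|| = 1
    def __init__(self, feat_dim, num_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale  # scale factor s; raw cosine scores lie in [-1, 1]

    def forward(self, z):
        z_hat = F.normalize(z, dim=1)            # normalize the feature vector
        w_hat = F.normalize(self.weight, dim=1)  # normalize each class weight w_i
        return self.scale * (z_hat @ w_hat.t())

Weight imprinting then amounts to copying the re-normalized class means of the support examples into self.weight before fine-tuning begins.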
The networks were retrained with mini-batch-based learning with a batch size of N K in the N -way K-shot learning scenario. The learning rate and number of epochs for finetuning were determined by using validation classes. We evaluated few-shot classification accuracy by calculating the mean accuracy of 600 trials using randomly sampled classes and examples in the novel classes. We also calculated the 95% confidence interval of the mean accuracy. In the validation, test, and network initialization phases for the novel classes, input images were preprocessed by resizing to 256 × 256, center-cropping to a size of 224 × 224, subtracting channel-wise means of the ImageNet dataset (0.485, 0.456, 0.406), and dividing by channel-wise standard-deviations of the ImageNet dataset (0.229, 0.224, 0.225) in the same manner as. The input preprocessing for fine-tuning phase was the same as discussed in Section 3.3. Table 1: Performance of our method in the 5-way low-resolution single-domain task. "Normalized" and "Simple" mean that the normalized and simple classifiers are used, respectively. "All", "BN & FC", and "FC" mean the following: the entire network was updated; the BN and FC layer were updated; only the FC layers were updated. "w/o FT" refers to performance without fine-tuning using novel classes. Values with the † mark refer to classification accuracy without fine-tuning, as validation accuracy was not increased by fine-tuning the network. The * mark means that the classification accuracy for novel classes was not available because the loss value did not decrease in the pretraining phase. The -mark means that we did not conduct an experiment because the network did not have BN layers. Table 2: Performance of our method in the 5-way high-resolution single-domain task. "Normalized" and "Simple" mean that the normalized and simple classifiers were used, respectively. "All", "BN & FC", and "FC" mean the following: the entire network was updated; the BN and FC layer were updated; only the FC layers were updated. "w/o FT" refers to performance without fine-tuning using novel classes. Values with the † mark refer to classification accuracy without fine-tuning, as validation accuracy was not increased by fine-tuning the network. The * mark means that classification accuracy for novel classes was not available because the loss value did not decrease in the pretraining phase. The -mark means that we did not conduct an experiment because the network did not have BN layers. Few-shot classification accuracies for the low-resolution single-domain task, high-resolution singledomain task, and cross-domain task are listed in Tables 1, 2, and 3, respectively. Table 1 shows that the classification accuracy could be increased by approximately 6% when the VGG-16 and normalized classifier were used in the 1-shot learning task. However, the accuracy could not be further improved by updating the entire network in the 1-shot learning task. However, classification accuracy could be further improved by updating the entire network in the 5-shot task. We assume that this was because the within-class difference could be reduced by fine-tuning the feature extractor when multiple novel examples were available. By comparing the for the high-resolution single-domain (Table 2) and low-resolution (Table 1) tasks, it could be argued that the robustness against low-resolution inputs differs depending on the feature extractor. 
For example, by comparing the for 5-shot "Normalized all" in Table 1 and 2, we can see that the classification accuracy of the ResNet-152 decreased by 11.0% whereas that of the VGG-16 decreased by only 4.3%. This implies that the robustness against low-resolution inputs should also be considered and that evaluating only few-shot learning performance is difficult. Although the low-resolution mini-ImageNet dataset is extremely useful for doing fast experiments, we must reconsider the validity of the dataset for evaluation of few-shot learning performance. Table 3: Performance of our method in the 5-way cross-domain task. "Normalized" and "Simple" mean that the normalized and simple classifiers were used, respectively. "All", "BN & FC", and "FC" mean the following: the entire network was updated; the BN and FC layer were updated; only the FC layers were updated. "w/o FT" refers to performance without fine-tuning using novel classes. Values with the † mark refer to classification accuracy without fine-tuning, as validation accuracy was not increased by fine-tuning the network. The * mark means that classification accuracy for novel classes was not available because the loss value did not decrease in the pretraining phase. The -mark means that we did not conduct an experiment because the network did not have BN layers. Table 4: Comparison between our method and conventional methods. We show several from different networks in Section 3.5, and therefore in this table show the highest accuracy for each task. More specifically, we use the from "VGG-16 Normalized FC", "VGG-16 Normalized All", "VGG-16 Normalized FC", "ResNet-50 Normalized All", and "ResNet-50 Normalized All" from left to right in this table. Values with the ‡ marks were reported by , and other values were reported in the original studies. The -mark means that the classification accuracy for the task was not reported. 59.9 69.7 --- The comparison of the from the cross-domain task (Table 3) and high-resolution singledomain task shows that the performance decreased in the cross-domain task. This can be explained by the larger domain shift that occurs between base and novel classes, as indicated by. In addition, the difference in classification accuracy between the cross-domain task and high-resolution single-domain task was decreased by fine-tuning the entire network. For example, in the 5-shot learning task using the VGG-16, the difference in classification accuracy between the single-domain and cross-domain tasks was 15.9% without fine-tuning, but it could be reduced to 6.9% by fine-tuning the entire network. This means that the network could adapt to a large domain shift by having the entire network updated. We discuss this further in greater detail in Section 3.7. 3.6 COMPARATIVE EVALUATION Table 4 shows comparative of ours and previous methods. Note that we use the best for each task as given in Section 3.5 because we obtained several from different networks. In the 1-shot low-resolution single-domain task, the classification accuracy was lower than that of the state-of-the-art algorithm, but it was still higher than those of other common few-shot learning methods such as MatchingNet and ProtoNet. It is interesting to note that our method achieved nearly the same classification accuracy as that of the state-of-the-art method in the 5-shot task. The reason for the higher performance in the 5-shot task may be that the within-class variance could be reduced by fine-tuning the entire network using several examples per class. 
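For reference, the accuracies discussed here are aggregated with the protocol of Section 3.4: the mean over 600 randomly sampled trials together with a 95% confidence interval. A minimal NumPy sketch of that aggregation follows; the use of the 1.96 normal approximation for the interval is an assumption about how the interval is computed.

import numpy as np

def summarize_trials(accuracies):
    # accuracies: per-trial accuracies over e.g. 600 randomly sampled episodes
    acc = np.asarray(accuracies, dtype=float)
    mean = acc.mean()
    half_width = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))  # 95% CI half-width
    return mean, half_width  # reported as "mean +/- half_width"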
In addition, we achieved higher classification accuracy than the reported values of conventional methods both in the high-resolution single-domain and cross-domain tasks. The difference in classification accuracy between the 5-shot high-resolution single-domain and cross-domain tasks was only approximately 5% with our method, whereas the performance of the conventional methods decreased by more than 10% in the cross-domain task. This shows that our method successfully We used the ResNet-18, normalized classifier, and Adam optimizer. We chose the 5-shot cross-domain task for visualization because the transition of validation accuracy is clearer than in other tasks. We set the learning rates as 0.01, 0.001, and 0.0001, and conducted four trials with randomly selected validation classes and support sets. Note that classification accuracy can be significantly changed by the randomly selected classes and samples. Therefore, we focused on the transition of the validation accuracy rather than the validation accuracy itself. reduced the effect of a large domain shift in the cross-domain task by updating the entire network for novel classes. We revealed in Section 3.5 and 3.6 that the fine-tuning method achieved high few-shot classification accuracy in many cases. In this section, we discuss the means of improving the performance of the fine-tuning method. We experimentally show that: • Using a low learning rate for fine-tuning stabilizes the retraining process. • Using adaptive gradient optimizers such as Adam increases the classification accuracy. • Higher performance can be obtained by updating the entire network when a large domain shift occurs. A learning rate is a critical parameter in training a network; this is also true for fine-tuning for fewshot learning. Figure 2 shows that the retraining process can be stabilized by using a lower learning rate. For example, the transition of the validation accuracy was unstable when the learning rate was set as 0.01 and 0.001. However, the validation accuracy increased in a stable manner when the learning rate was set as 0.0001, which is lower than that used in the pretraining phase (lr = 0.001). This means that the learning rate for few-shot fine-tuning should be set low. Here, we show that optimizers affect few-shot classification performance when the network is , RMSprop , Momentum-SGD, and ASGD . Of these, Adam, Adamax, Adadelta, Adagrad, and RMSprop are known as adaptive gradient methods. Figure 3 shows the classification accuracies for a 5-shot high-resolution single-domain task using the ResNet-18 with different optimizers. The show that higher classification accuracies could be obtained by using adaptive gradient optimizers, particularly when the normalized classifier was used. Although revealed that local minima obtained by the Adam optimizer lack a generalization ability, our experimental show that this was not necessarily true for few-shot learning. Why this occurs in few-shot learning is interesting, but this is beyond the scope of this study. Therefore, we leave this interesting direction for future works. Updating the Entire Network for Adaptation to a Large Domain Shift Figure 4 shows the relationship between test accuracy and the updated parts of the network. This shows that updating the entire network achieves higher accuracy, particularly when the normalized classifier is used. 
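A sketch of the three fine-tuning variants compared throughout this section ("All", "BN & FC", "FC"), together with the low-learning-rate Adam setting discussed above. The attribute name model.classifier and the reliance on BatchNorm2d modules are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

def select_trainable(model, mode):
    # mode: "all" (entire network), "bn_fc" (BN affine params + classifier), "fc" (classifier only)
    for p in model.parameters():
        p.requires_grad = (mode == "all")
    if mode == "bn_fc":
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                for p in m.parameters():
                    p.requires_grad = True   # gamma and beta of the BN layers
    if mode in ("bn_fc", "fc"):
        for p in model.classifier.parameters():   # assumed name of the final FC layer
            p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]

# a learning rate below the pretraining rate of 0.001 stabilizes retraining,
# and an adaptive optimizer such as Adam further improves accuracy, e.g.:
# params = select_trainable(model, mode="all")
# optimizer = torch.optim.Adam(params, lr=1e-4)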
Considering the for the high-resolution single-domain task, when test accuracy was not further increased by updating the entire network, it could be argued that updating the entire network when a large domain shift occurs between base and novel classes is preferable. In this study, we showed that in the low-resolution single-domain task, our fine-tuning method achieved higher accuracy than common few-shot learning methods in the 1-shot task and nearly the same accuracy as the state-of-the-art method in the 5-shot task. We also evaluated our method with more practical tasks, such as the high-resolution single-domain and cross-domain tasks. In both tasks, our method achieved higher accuracy than common few-shot learning methods. We then experimentally showed that: 1) a low learning rate stabilizes the retraining process, 2) adaptive gradient optimizers such as Adam improve test accuracy, and 3) updating the entire network in higher accuracy when a large domain shift occurs. We believe that these insights into fine-tuning for few-shot learning tasks will help our community tackle this challenging task. | An empirical study that provides a novel perspective on few-shot learning, in which a fine-tuning method shows comparable accuracy to more complex state-of-the-art methods in several classification tasks. | 1,267 | scitldr |
We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks. We model the order stream as a stochastic process with finite history dependence, and employ a conditional Wasserstein GAN to capture history dependence of orders in a stock market. We test our approach with actual market and synthetic data on a number of different statistics, and find the generated data to be close to real data. Financial markets are among the most well-studied and closely watched complex multiagent systems in existence. Well-functioning financial markets are critical to the operation of a complex global economy, and small changes in the efficiency or stability of such markets can have enormous ramifications. Accurate modeling of financial markets can support improved design and regulation of these critical institutions. There is a vast literature on financial market modeling, though still a large gap between the state-of-art and the ideal. Analytic approaches provide insight through highly stylized model forms. Agent-based models accommodate greater dynamic complexity, and are often able to reproduce "stylized facts" of real-world markets BID10. Currently lacking, however, is a simulation capable of producing market data at high fidelity and high realism. Our aim is to develop such a model, to support a range of market design and analysis problems. This work provides a first step, learning a high-fidelity generator from real stock market data streams. Our main contribution is an approach to produce stock market data that is close to real market data, using a Wasserstein generative adversarial network (WGAN). There are many challenges that we overcome as part of this contribution. The first is how to represent a stream of stock market orders as data that can be used in a WGAN. Towards this end, we assume the stock market data stream to arise from a stochastic process with finite (but long) memory dependence. The stochastic process view also makes precise the conditional distribution that the generator is learning as well the joint distribution that the critic of the WGAN distinguishes by estimating the earth-mover distance. The second main challenge is the design of the network architecture. We choose a conditional WGAN to capture the history dependence of the stochastic process, with both the generator and critic conditional on history of orders and the time of day. A single LSTM layer is used to summarize the history succinctly. The internal architecture of both the generator and critic uses a standard convolutional structure. The generator outputs the next stock market order as well as how this order changes the active orders in the market. Part of the generator output, which updates the active market orders, is produced using a pre-trained network to approximate the deterministic buy and sell order matching in the stock market. Finally, we experiment with synthetic and real market data. The synthetic data is produced using a stock market simulator that has been used in several agent-based financial studies. The real data was obtained from OneMarketData, a financial data provider and publisher of the OneTick database product. We evaluate the generated data using various statistics such as the distribution of price and quantity of orders, inter-arrival times of orders, and the best bid and best ask evolution over time. We find the generated data matches the corresponding statistics in real data (simulated or actual stock market) closely. 
WGAN is a popular and well-known variant of GANs BID7. Most prior work on generation of sequences using GANs has been in the domain of text generation BID12 BID17. However, since the space of word representations is not continuous, the semantics change with nearby word representation, and given a lack of agreement on the metrics for measuring goodness of sentences, producing good quality text using GANs is still an active area of research. Stock market data does not suffer from this representation problem but the history dependence for stock markets can be much longer than for text generation. In a sequence of recent papers, BID15 have introduced GAN-based methods for generating point processes. The proposed methods generate the time for when the next event will occur. The authors have also explored the use of these techniques to generate the time for transaction events in stock markets. Our problem is richer as we aim to generate the actual limit orders including time, order type, price, and quantity information. Deep neural networks and machine learning techniques have been used on financial data mostly for prediction of transaction price BID9 BID3 BID13 and for prediction of actual returns BID0. As stated, our goal is not market prediction per se, but rather market modeling. Whereas the problems of learning to predict and generate may overlap (e.g., both aim to capture regularity in the domain), the evaluation criteria and end product are quite distinct. The stock market is a venue where equities or stocks of publicly held companies are traded. Nearly all stock markets follow the continuous double auction (CDA) mechanism BID6. Traders submit bids, or limit orders, specifying the maximum price at which they would be willing to buy a specified quantity of a security, or the minimum price at which they would be willing to sell a quantity. 1 The order book is a store that maintains the set of active orders: those submitted but not yet transacted or canceled. CDAs are continuous in the sense that when a new order matches an existing (incumbent) order in the order book, the market clears immediately and the trade is executed at the price of the incumbent order-which is then removed from the order book. Orders may be submitted at any time, and a buy order matches and transacts with a sell order when the limits of both parties can be mutually satisfied. For example, as shown in FIG0 if a limit buy order with price $10.01 and quantity 100 arrives and the order book has the best offered sell price at $10.01 with quantity 100 then the arriving order matches an incumbent exactly. However, the next buy order does not match any sell, and the following sell order partially matches what is then the best buy in the order book. The limit order book maintains the current active orders in the market (or the state of the market), which can be described in terms of the quantity offered to buy or sell across the range of price levels. Each order arrival changes the market state, recorded as an update to the order book. After processing any arrived order every buy price level is higher than all sell price levels, and the best bid refers to the lowest buy price level and the best ask refers to the highest sell price level. See FIG0 for an illustration. The order book is often approximated by few (e.g., ten) price levels above the best bid and ten price levels below the best ask; as these prices are typically the ones that dictate the transactions in the market. 
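A toy sketch of the CDA matching rule described above, using a dict-of-price-levels book with transactions executed at the incumbent order's price. This is plain Python with illustrative names; cancellations and time priority within a price level are omitted for brevity.

from collections import defaultdict

class OrderBook:
    def __init__(self):
        self.bids = defaultdict(int)   # price -> resting buy quantity
        self.asks = defaultdict(int)   # price -> resting sell quantity

    def best_bid(self):
        return max(self.bids) if self.bids else None

    def best_ask(self):
        return min(self.asks) if self.asks else None

    def submit(self, side, price, qty):
        # CDA rule: an arriving order transacts immediately against the best
        # opposite level(s) it can satisfy, at the incumbent order's price;
        # any remainder rests in the book
        book, opp = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        crosses = (lambda p: p <= price) if side == "buy" else (lambda p: p >= price)
        while qty > 0 and opp:
            best = min(opp) if side == "buy" else max(opp)
            if not crosses(best):
                break
            traded = min(qty, opp[best])
            qty -= traded
            opp[best] -= traded
            if opp[best] == 0:
                del opp[best]
        if qty > 0:
            book[price] += qty

book = OrderBook()
book.submit("sell", 10.01, 100)
book.submit("buy", 10.01, 100)   # matches the resting sell exactly, as in the example above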
There are various kinds of traders in a stock market, ranging from individual investors to large investing firms. Thus, there is a wide variation in the nature of orders submitted for a security. We aim to generate orders for a security in aggregate (not per agent) that is close to the aggregate orders generated in a real market. We focus on generating orders and do not attempt to generate transactions in the stock market. This is justified as the CDA mechanism is deterministic and transactions can be determined exactly given a stream of orders. We model stock market orders as a stochastic process. Recall that a stochastic process is a collection of random variables indexed by a set of numbers. We view the stock market orders for a given chunk of time of day ∆t as a collection of vector valued random variable {x i} i∈N indexed by the limit order sequence number in N = {1, . . ., n}. The components of the random vector x i include the time interval d i, type of order t i, limit order price p i, limit order quantity q i, and the best bid a i and best ask b i. The time interval d i specifies the difference in time between the current order i and previous order i − 1 (in precision of milliseconds); the range of d i is finite. The type of order can be buy, sell, cancel buy, or cancel sell (represented in two bits). The price and quantity are restricted to lie within finite bounds. The price range is discretized in units of US cents and the quantity range is discretized in units of the equity (non-negative integers). The best bid and best ask are limit orders themselves and are specified by price and quantity. Observe that we assume the stochastic process depends on the discrete time of day ∆t, which we will make explicit in the next paragraph. We divide the time in a day into 25 equal intervals and ∆t refers to the index of the interval. A visual representation of x i is shown in FIG1 (a).Following the terminology prevalent for stochastic processes, the above process is discrete time and discrete space (note that discrete time in this terminology here refers to the discreteness of the index set N). We assume a finite history dependence of the current output x i, that is, P (x i | x i−1, . . ., ∆t) = P (x i | x i−1, . . ., x i−m, ∆t). Such dependence is justified by the observation that recent orders mostly determine the transactions and transaction price in the market as orders that have been in the market for long either get transacted or canceled. Further, the best bid and best ask serves as an (approximate) sufficient statistic for events beyond the history length m. While this process is not a Markov chain, it forms what is known as a higher order Markov chain, which implies that the process given by y i = (x i, . . ., x i−m+1) is a Markov chain for any given time interval ∆t. We assume that this chain formed by y i has a stationary distribution (i.e., it is irreducible and positive recurrent). A Markov chain is a stationary stochastic process if it starts with its stationary distribution. After some initial mixing time, the Markov chain does reach its stationary distribution, thus, we assume that the process is stationary by throwing away some initial data for the day. Also, for the jumps across two time intervals ∆t, we assume the change in stationary distribution is small and hence the mixing happens very quickly. A stationary process means that P (x i, . . ., x i−m+1 | ∆t) has the same distribution for any i. In practice we do not know m. 
However, we can assume a larger history length k + 1 > m, and then it is straightforward to check that y t = (x i, . . ., x i−k) is a Markov chain and the claims above hold with m − 1 replaced by k. We choose k = 20. Given the above stochastic process view of the problem, we design a conditional WGAN with a recurrent architecture to learn the real conditional distribution P r (x i | x i−1, . . ., x i−k, ∆t). We use the subscript r to refer to real distributions and the subscript g to refer to generated distributions. The real data x 1, x 2,... is a realization of the stochastic process. It is worth noting that even though P (x i, . . ., x i−k | ∆t) has the same distribution for any i, the realized real data sequence x i,... x i−k is correlated with any overlapping sequnce x i+k,... x i−k+k for k ≥ k ≥ −k. Our data points for training (stated in detail in the next paragraph) are sequences x i,... x i−k, and to ensure independence in a batch we make sure that the sequences chosen in a batch are sufficiently far apart. The architecture is shown in FIG1. Our proposed WGAN is conditional BID11 with both the generator and critic conditioned on a k length history and the time interval ∆t. The history is condensed to one vector using a single LSTM layer. This vector and some uniform noise is fed to a fully connected layer layer followed by a convolutional structure. The generator outputs the next x i (realization of x i) and the critic outputs one real number. Note that when training both generator and critic are fed history from real data, but when the generator executes after training it is fed its own generated data as history. As stated earlier, the generator output includes the best bid and ask. As the best bid and ask can be inferred deterministically from the current order and the previous best bid and ask (for most orders), we use another neural network (with frozen weights during GAN training) to output the best bid and ask. We call this the CDA network. The CDA network is trained separately using a standard MSE loss (see Appendix C).Critic details: When fed real data, the critic can be seen as a function c w of s i = (x i, . . ., x i−k, ∆t), where w are the weights of the critic network. As argued earlier, samples in a batch that are chosen from real data that are spaced at least k apart are i.i.d. samples of P r. Then for m samples fed to the critic, DISPLAYFORM0 When fed generated data (with the ten price levels determined from the output order and previous ten levels), by similar reasoning 1 m m i=1 c w (s i) estimates E s∼Pg (c w (s)) when the samples are sufficiently apart (recall that the history is always real data). Thus, the critic computes the Wasserstein distance between the joint distributions P r (x i, . . ., x i−k, ∆t) and P g (x i, . . ., x i−k, ∆t). Further, we use a gradient penalty term in the loss function for the critic instead of clipping weights as proposed in the original WGAN paper because of the better performance as revealed in prior work BID8.Generator details: The generator learns the conditional distribution P g (x i | x i−1, . . ., x i−k, ∆t). Along with the real history, the generator represents the distribution P g (x i, . . ., DISPLAYFORM1 The loss functions used is the standard WGAN loss function with a gradient penalty term BID8). 
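The critic objective mentioned at the end of this passage, the WGAN loss with a gradient penalty, can be sketched as below. The paper's own appendix code is Keras/TensorFlow; purely for brevity this sketch uses PyTorch. Here critic is assumed to be any module mapping an (order, history) pair to a scalar score, and the penalty weight of 10 is the commonly used default rather than a value stated in the text.

import torch

def critic_loss_wgan_gp(critic, real, fake, history, lam=10.0):
    # real, fake: (batch, order_dim) real vs. generated orders; history: conditioning input
    d_real = critic(real, history).mean()
    d_fake = critic(fake, history).mean()
    # gradient penalty evaluated on random interpolates between real and fake samples
    eps = torch.rand(real.size(0), 1, device=real.device)
    mix = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(mix, history).sum(), mix, create_graph=True)[0]
    penalty = ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
    # the critic maximizes d_real - d_fake, i.e. minimizes the value returned below
    return d_fake - d_real + lam * penalty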
The critic is trained 100 times in each iteration and as already stated, the notable part in constructing the training data is that for each mini-batch the sequence of orders chosen (including history) is far away from any other sequence in that mini-batch (see Appendix C for code snippets). We apply and evaluate Stock-GAN on two types of data sets composed of orders from an agentbased market simulator and from a real stock market, respectively. We describe each data set in detail and then compare key metrics and distributions of our generated orders with ground truth orders from the agent-based simulator and real stock markets. Synthetic data: We first evaluate Stock-GAN on synthetic orders generated from an agent-based market simulator. Previously adopted to study a variety of issues in financial markets (e.g., market making and manipulation), the simulator captures stylized facts of the complex financial market with specified stochastic processes and distributions BID14. We briefly describe the market simulator below. In the simulation, the market operates over a finite time horizon. Agents enter and reenter the market according to a Poisson process with an arrival rate of 0.005. On each arrival these traders submit a limit order to the market (replacing their previous order, if any), indicating the price at which they are willing to buy or sell a single unit of the security. The market environment is populated by 32 traders, representing investors. Each investor has an individual valuation for the security made up of private and common components. The common component is represented by a fundamental value, which can be viewed as the intrinsic value of the security. This fundamental value varies over time according to a mean-reverting stochastic process. The private component of value captures the preference contribution of the individual agent's reason for trading this security at the current time (e.g., investment, liquidity, diversification). The private valuations are drawn from a specified distribution at the start of a simulation. The common and private components are effectively added together to determine each agents valuation of the security. Agents accrue private value on each transaction, and at the end of the trading horizon evaluate their accumulated inventory on the basis of a prediction of the end-time fundamental. Given the market mechanism and valuation model for the simulation, investors pursue their trading objectives by executing a trading strategy in that environment. A popular trading strategy we adopt in the simulator is the zero-intelligence (ZI) strategy BID5. The ZI trader shades its bid from its current valuation of the stock by a random offset. We use about 300,000 orders generated by the simulator as our synthetic data. The price output by the simulator is normalized to lie in the interval [−1, 1].Real data: We obtained real limit-order streams from OneMarketData, who provided access for our research to their OneTick database for selected time periods and securities. The provided data streams comprise order submissions and cancellations across multiple exchanges at millisecond granularity. In experiments, we evaluate in the performance of Stock-GAN on two securities: a small capitalization stock, Patriot National (PN), and a large capitalization stock, Alphabet Inc (GOOG). 
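Returning briefly to the synthetic data above, a toy NumPy sketch of the two simulator ingredients described there, a mean-reverting fundamental value and zero-intelligence bid shading, is given below; the numerical parameters are placeholders and not the simulator's actual configuration.

import numpy as np

rng = np.random.default_rng(0)

def fundamental_path(T, mean=100_000.0, kappa=0.05, sigma=1_000.0):
    # mean-reverting fundamental (intrinsic) value process
    f = np.empty(T)
    f[0] = mean
    for t in range(1, T):
        f[t] = f[t - 1] + kappa * (mean - f[t - 1]) + sigma * rng.standard_normal()
    return f

def zi_limit_order(valuation, max_shade=500.0):
    # zero-intelligence trader: shade the limit price away from the current valuation
    side = rng.choice(["buy", "sell"])
    shade = rng.uniform(0.0, max_shade)
    price = valuation - shade if side == "buy" else valuation + shade
    return side, price, 1   # unit quantity, as in the simulator description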
The two stocks differ in several key aspects, including investment sector, market activity intensity, price range, liquidity etc., and thus their order patterns represent distinct dynamic processes. By training Stock-GAN with historical data for individual stocks, we can generate limit-order streams that capture key characteristics of each. Relative to our simulated agent-based market, the real market limit orders tend be very noisy including many orders at extreme prices far from the range where transactions occur. Since our interest is primarily on behavior that can affect market outcomes, we focus on limit orders in the relevant range near the best bid and ask. Specifically, in a preprocessing step, we eliminate limit orders that never appear within ten levels of the best bid and ask prices. In the experiment reported here, we use historical real market data of PN during one trading day in August 2016, and GOOG during one trading day in August 2017. After preprocessing, the PN daily order stream has about 20,000 orders and GOOG has about 230,000. We generate a number of orders equal to the number of real orders used to train the WGAN. We evaluate our generated order stream in comparison to real data using the following statistics:1. Price. Distribution over price for the day's limit orders, by order type.2. Quantity. Distribution over quantity for the day's limit orders, by order type.3. Inter-arrival time. Distribution over inter-arrival durations for the day's limit orders, by order type.4. Intensity evolution. Number of orders for consecutive 1000-second chunks of time.5. Best bid/ask evolution. Changes in the best bid and ask over time as new orders arrive. A note on cancellation: In our generation process, cancellation type orders are not contingent on the order book. We use a heuristic which is to match the generated cancellation order to the closest priced order in the book. Cancellations that are too far from any existing order to be a plausible match are ignored. In describing our , "real" refers to simulated or actual stock market data and "fake" refers to generated data. FIG3 presents statistics on buy orders for the three cases when the real data is simulated, PN, or GOOG. For simulated data, the price and inter-arrival distribution matches the real distribution quite closely. The quantity for the simulated data is always one, which is also trivially captured in the generated data. For PN and GOOG, the quantity distribution misses out on some peaks but gets most of the peaks in the real distribution. The inter-arrival time distribution matches quite closely (note that the axis has been scaled for inter-arrival time to highlight the peaks and show the full range of time). The price distribution matches closely for GOOG, but is slightly off for PN, which could be due to the low amount of data for PN. differences are particularly large for PN, likely due to the relatively smaller magnitude of trading volume for that stock. In FIG8, we show the change in best buy/ask as a function of time for the simulated, PN, and GOOG markets. The generated looks similar to real data in range and variation over time for simulated data. The similarity to real best bid/ask is better for GOOG than PN, which could possibly be due to more data available for GOOG.Quantitative measures: The figures till now show that the price distribution appears like a normal distribution and the inter-arrival time appears like a geometric distribution (geometric is discrete version of exponential). 
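A sketch of how the comparison statistics listed above, and the total-variation comparison used in the next passage, can be computed from an order stream (NumPy; the binning choices are illustrative assumptions).

import numpy as np

def inter_arrival_dist(times_ms, bins):
    # empirical distribution of inter-arrival durations (per order type, if filtered upstream)
    gaps = np.diff(np.sort(np.asarray(times_ms, dtype=float)))
    hist, _ = np.histogram(gaps, bins=bins)
    return hist / hist.sum()

def intensity(times_s, chunk=1000.0):
    # number of orders in consecutive 1000-second chunks of the trading day
    t = np.asarray(times_s, dtype=float)
    edges = np.arange(t.min(), t.max() + chunk, chunk)
    counts, _ = np.histogram(t, bins=edges)
    return counts

def tv_distance(p, q):
    # total variation distance between two distributions over the same bins
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()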
We fit these standard distributions to the real price and inter-arrival distribution and compare the total variation (TV) distance between the real and fitted vs real and generated Table 1: TV distance comparisons between fitted and generated distribution. IA means inter-arrival.distributions. The quantity distribution does not appear like any standard distribution, hence we do not evaluate it by fitting. The in Table 1 show that the generated price distribution is almost as close to the real one as the fitted price distribution. The generated inter-arrival distribution is much closer to the real one than the fitted price distribution. A point to note is that the actual price and quantity is a stochastic process with dependence on history, thus, the fitted distributions will not be helpful in generating the correct intensities or best bid and best ask evolution. A note on architectural choices: Various parts of our architecture were developed iteratively to improve the that we obtained in a previous iteration. The input of ∆t to the generator and critic is critical to get the time trend in the intensity for the GOOG stock. The CDA network and the best bid and ask in history was added to improve the for best bid/ask variation over time. Comparision with baseline: We also implemented a variational recurrent generative network but found its performance to be worse than our approach (shown in Appendix B). Our reveal that GANs can be used to simulate a stock market. While our are promising, there are open issues that provide for further research material. One experimental aspect is to try different size of the network in the WGAN, possibly dependent on the data size of the given stock and testing with many different variety of stocks. Another open research issue is to output cancellations in a more intelligent manner than the heuristic approach we use now. Overall, our work provides fertile ground for future research at the intersection of deep learning and finance. Below we show for buy order cancellation and sell order cancellation using the exact same measures as for the buy and sell orders in the main paper. The also are similar to buy or sell earlier. Figure 9: Simulated, PN, and GOOG submitted buy-order statistics using recurrent VAE.(a) GOOG intensity plot FIG0: Intensity of market activities for GOOG using recurrent VAE. We use the variational recurrent network as another baseline generative model. The architecture is exactly same as the work BID4. We used the code available at https: //github.com/phreeza/tensorflow-vrnn, but modified it. Our modification was to enable not forcing the output to be Gaussian as done in BID4, as those produced much worse . Instead, we use a MSE loss. We also modified the input size, etc. to make the neural network structure compatible with our problem. The exact change to the code changing the loss function is shown below:kl loss = tf kl gaussgauss (enc mu, enc sigma, prior mu, prior sigma) # we replace the maximium likelihood loss with the mse loss below mse loss = tf. losses. mean squared error (y, dec rho) return tf. reduce mean (kl loss + mse loss)The in Figure 9 for GOOG buy order only and in FIG0 for all types of GOOG orders shows that the entropy of the output is high (when comparing price and inter-arrival distributions) and the performance is worse than our GAN. In particular, the generated (fake) price distribution is wider than the real one (or the one generated by the GAN). 
The generated inter-arrival distribution is almost uniform over the discrete time points and not concentrated at 0. The quantity distribution matches the real one, somewhat similarly like our GAN approach, but it generates some negative values unlike our GAN approach (which could be discarded). The intensity distribution is also somewhat close to the real intensity. The are similar for other types of orders.self.net = Model (inputs = input his, outputs = output vec) optimizer = Adam (0.0001) self.net. compile (optimizer =optimizer, loss=' mean squared error ') self.net. summary Input LSTM structure for both Generator and Critic are shown below # ########## Input for both Generator and Critic ####################### # history orders of shape (self. historyLength, self. orderLength) history = Input (shape =( self. historyLength, self. orderLength), \ name=' history full') # current time slot: Integer, from 0 to 23 history input = Input (shape =(1,), name=' history time') # noise input of shape (self. noiseLength) noise input 1 = Input (shape =( self. noiseLength,), name=' noise input 1') # Real order of shape (( self. mini batch size,self. orderLength) truth input = Input (shape =( self. mini batch size,\ self. orderLength,1), name=' truth input') # lstm at Generator to extract history orders features lstm output = LSTM(self. lstm out length)(history) # lstm at Critic to extract history orders features lstm output h = LSTM(self. lstm out length,name=' lstm critic ')(history)# concatenate history features with noise gen input = Concatenate (axis=−1)([ history input, lstm output, noise input 1])The Generator structure is shown below, which includes the trained CDA network # ############ Generator ######################## # Input: gen input, shape (self. noiseLength +self. lstm out length + 1) | We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks. | 1,268 | scitldr |
We present a novel black-box adversarial attack algorithm with state-of-the-art model evasion rates for query efficiency under $\ell_\infty$ and $\ell_2$ metrics. It exploits a \textit{sign-based}, rather than magnitude-based, gradient estimation approach that shifts the gradient estimation from continuous to binary black-box optimization. It adaptively constructs queries to estimate the gradient, one query relying upon the previous, rather than re-estimating the gradient each step with random query construction. Its reliance on sign bits yields a smaller memory footprint and it requires neither hyperparameter tuning or dimensionality reduction. Further, its theoretical performance is guaranteed and it can characterize adversarial subspaces better than white-box gradient-aligned subspaces. On two public black-box attack challenges and a model robustly trained against transfer attacks, the algorithm's evasion rates surpass all submitted attacks. For a suite of published models, the algorithm is $3.8\times$ less failure-prone while spending $2.5\times$ fewer queries versus the best combination of state of art algorithms. For example, it evades a standard MNIST model using just $12$ queries on average. Similar performance is observed on a standard IMAGENET model with an average of $579$ queries. Problem. Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are malicious inputs designed to fool the model's prediction-see for a comprehensive, recent overview of adversarial examples. Research on generating these malicious inputs started in the white-box setting, where access to the gradients of the models is assumed. Since the gradient points to the direction of steepest ascent, an input can be perturbed along the gradient's direction to maximize the network's loss, thereby potentially causing misclassification under class prediction, e.g. with images, or evasion under detection, e.g. with malware. The assumption of access to the underlying gradient does not however reflect real world scenarios. Attack algorithms under a more realistic, restrictive black-box threat model, which assumes access to predictions in lieu of gradients, are therefore studied. Central to their approaches is estimating the gradient. To estimate the magnitudes and signs of the gradient, the community at large has formulated a continuous optimization problem of O(n) complexity where n is the input dimensionality. Most recently work has sought to reduce this complexity by means of data-/time-dependent priors. In this paper, we take a different tact and reduce the central problem to just estimating the signs of the gradients. Our intuition arises from observing that estimating the sign of the top 30% gradient coordinates by magnitude is enough to achieve a rough misclassification rate of 70%. Figure 1 reproducing illustrates this observation for the MNIST dataset-see Appendix A for other datasets. Therefore our goal is to recover the sign of the gradient with high query efficiency so we can use it to generate adversarial examples as effective as those generated by full gradient estimation approaches. Related Work. We organize the related work in two themes, namely Adversarial Example Generation and Sign-Based Optimization. The literature of the first theme primarily divides into white-box and black-box settings. 
The white-box setting, while not the focus of this work, follows from the works of and who introduced the Fast Gradient Sign Method (FGSM), including several methods to produce adversarial examples for various learning tasks and threat perturbation constraints (; ; ; ; ;). Turning to the blackbox setting and iterative optimization schemes, , without using any gradient information, use a naive policy of perturbing random segments of an image to generate adversarial examples. reduce the dimensions of the feature space using Principal Component Analysis (PCA) and random feature grouping, before estimating gradients. introduce a principled approach by using gradient based optimization. They employ finite differences, a zeroth-order optimization means, to estimate the gradient and then use it to design a gradient-based attack. While this approach successfully generates adversarial examples, it is expensive in how many times the model is queried. substitute traditional finite differences methods with Natural Evolutionary Strategies (NES) to obtain an estimate of the gradient. provide an adaptive random gradient estimation algorithm that balances query counts and distortion, and introduces a trained auto-encoder to achieve attack acceleration. extend this line of work by proposing the idea of gradient priors and bandits: Bandits T D. Our work contrasts with the general approach of these works in two ways: a) We focus on estimating the sign of the gradient and investigate whether this estimation suffices to efficiently generate adversarial examples. b) The above methods employ random sampling in constructing queries to the model while our construction is adaptive. 1 Another approach involves learning adversarial examples for one model (with access to its gradient information) to transfer them against another . use a Generative Adversarial Network (GAN) to generate adversarial examples which are based on small norm-bounded perturbations. These methods involve learning on a different model, which is expensive, and not amenable to comparison with setups-including ours-that directly query the model of interest. Figure 1: Misclassification rate of an MNIST model on the noisy FGSM's adversarial examples as a function of correctly estimated coordinates of sign(∇ x f (x, y)) on 1000 random MNIST images. Estimating the sign of the top 30% gradient coordinates (in terms of their magnitudes) is enough to achieve a rough misclassification rate of 70%. More details can be found in Appendix A. Sign-Based Optimization. In the context of generalpurpose continuous optimization methods, signbased stochastic gradient descent was studied in both zeroth-and first-order setups. In the latter, analyzed signSGD, a sign-based Stochastic Gradient Descent, and showed that it enjoys a faster empirical convergence than SGD in addition to the cost reduction of communicating gradients across multiple workers. extended signSGD to zeroth-order setup with the ZO-SignSGD algorithm. ZO-SignSGD was shown to outperform NES against a blackbox model on MNIST. These approaches use the sign of the gradient (or its zero-order estimate) to achieve better convergence, whereas our approach both estimates and uses the sign of the gradient. Contributions. 
We present the following contributions at the intersection of adversarial machine learning and black-box (zeroth-order) optimization: 1) We exploit the separability property of the directional derivative of the loss function of the model under attack in the direction of {±1} n vectors, to propose a divide-and-conquer, adaptive, memory-efficient algorithm, we name SignHunter, to estimate the gradient sign bits. 2) We provide a worst-case theoretical guarantee on the number of queries required by SignHunter to perform at least as well as FGSM , which has access to the model's gradient. To our knowledge, no black-box attack from the literature offers a similar performance guarantee. 3) We evaluate our approach on a rigorous set of experiments on both, standard and adversarially hardened models. All other previous works on this topic have published their on a subset of the datasets and threat models we experimentally validate in this work. Through these experiments, we demonstrate that SignHunter's adaptive search for the gradient sign allows it to craft adversarial examples within a mere fraction of the theoretical number of queries thus outperforming FGSM and state-of-the-art black-box attacks. 4) We release a software framework to systematically benchmark adversarial black-box attacks, including SignHunter's, on MNIST, CIFAR10, and IMAGENET models in terms of success rate, query count, and other metrics. 5) We demonstrate how SignHunter can be used to characterize adversarial cones in a black-box setup and in doing so, highlight the gradient masking effect. Notation. Let n denote the dimension of datapoint x. Denote a hidden n-dimensional binary code by q *. That is, q * ∈ H ≡ {−1, +1} n. Further, denote the directional derivative of some function f at a point x in the direction of a vector v by D v f (x) ≡ v T ∇ x f (x) which often can be approximated by the finite difference method. That is, for δ > 0, we have Let Π S (·) be the projection operator onto the set S, B p (x,) be the p ball of radius around x. At the heart of black-box adversarial attacks is generating a perturbation vector to slightly modify the original input x so as to fool the network prediction of its true label y. Put differently, an adversarial example x maximizes the network's loss L(x, y) but still remains -close to the original input x. Although the loss function L can be non-concave, gradient-based techniques are often very successful in crafting an adversarial example. That is, setting the perturbation vector as a step in the direction of ∇ x L(x, y). Consequently, the bulk of black-box attack methods try to estimate the gradient by querying an oracle that returns, for a given input/label pair (x, y), the value of the network's loss L(x, y), consulting prediction or classification accuracy. Using only such value queries, the basic approach relies on the finite difference method to approximate the directional derivative (Eq. 1) of the function L at the input/label pair (x, y) in the direction of a vector v, which, one can construct a linear system of equations to recover the full gradient. Clearly, this approach's query complexity is O(n), which can be prohibitively expensive for large n (e.g., n = 268, 203 for the IMAGENET dataset). Recent works try to mitigate this issue by exploiting data-and/or time-dependent priors (; ; 2019). However, the queries are not adaptive, they are constructed based on i.i.d. random vectors {v i}. 
They fail to make use of the past queries' responses to construct the new query and recover the full gradient more efficiently. As stated in the introduction, we solve the smaller problem of gradient sign estimation with adaptive queries based on the observation that simply leveraging (noisy) sign bits of the gradient yields successful attacks-see Figure 1. Definition 1. (Gradient Sign Estimation Problem) For an input/label pair (x, y) and a loss function L, let g * = ∇ x L(x, y) be the gradient of L at (x, y) and q * = sign(g *) ∈ H be the sign bit vector of g *. 2 Then the goal of the gradient sign estimation problem is to find a binary vector q ∈ H maximizing the directional derivative from a limited number of (possibly adaptive) function value queries L(x, y). Our goal is to estimate the gradient sign bits of the loss function L of the model under attack at an input/label pair (x, y) from a limited number of loss value adaptive queries L(x, y). To this end, we examine the basic concept of directional derivatives that has been employed in recent black-box 2 Without loss of generality, we encode the sign bit vector in H ≡ {−1, +1} n rather than {0, 1} n. This is a common representation in sign-related literature. Note that the standard sign function has the range of {−1, 0, +1}. Here, we use the non-standard definition whose range is {−1, +1}. This follows from the observation that DNNs' gradients with respect to their inputs are not sparse (, Appendix B.1). 3 The maximization follows from DqL(x, y) = q T g *, which is maximized when q = q * = sign(g *). adversarial attacks. Based on the definition of the directional derivative (Eq. 1), the following can be stated. of the loss function L at an input/label pair (x, y) in the direction of a binary code q is separable. That is, Algorithm 1 SignHunter g: H → R: the black-box function to be maximized over the binary hypercube return done This reformulates the gradient sign estimation problem from single n-dimensional to n 1-dimensional binary black-box optimization problems, reducing the search space of sign bits from 2 n to 2n. Subsequently, one could recover the gradient sign bits with n + 2 queries as follows: i. Start with an arbitrary sign vector q and compute the directional derivative D q L(x, y). Using Eq. 1, this requires two queries: L(x+δq, y) and L(x, y). ii. For the remaining n queries, flip q's bits (coordinates) one by one and compute the corresponding directional derivativeone query each L(x + δq, y). iii. Retain bit flips that maximize the directional derivative D q L(x, y) and revert those otherwise. This, however, still suffers from the O(n) complexity of full gradient estimation methods. Further, each query recovers at most one sign bit and the natural question to ask is: can we recover more sign bits per query? Consider the case where all the gradient coordinates have the same magnitude, i.e., |{|g * i |} 1≤i≤n |= 1, and let the initial guess q 1 have r correct bits and n − r wrong ones. Instead of flipping its bits sequentially, we can flip them all at once to get, then we retain q 2 as our best guess with n − r correct bits, otherwise q 1 remains. In either cases, with three queries, we will recover max(r, n − r) sign bits. One can think of this flip/revert procedure as one of majority voting by the guess's coordinates on whether they agree with their gradient sign's counterparts. To see this, let |g * i |= 1 for all i, then the condition D q2 L(x, y) ≥ D q1 L(x, y) can be written as n − r − r ≥ r − n + r =⇒ n ≥ 2r. 
If the agree votes r are fewer than half of the total votes n, then q_2 is retained. Besides flipping all the coordinates, one can employ the same procedure iteratively on a subset (chunk) of the coordinates [q_j, ..., q_{j+n_i}] of the guess vector q, recovering max(r_i, n_i − r_i) sign bits, where n_i and r_i are the length of the i-th chunk and the number of its correct signs, respectively. While the magnitudes of the gradient coordinates may not all have the same value as assumed in the previous example, through empirical evaluation (see Appendix F) we found them to be concentrated. Consequently and with high probability, their votes on retaining or reverting chunks of sign flips are weighted (by their corresponding gradient magnitude) similarly. That said, if we are at a chunk where the distribution of the gradient coordinate magnitudes is uniform, then the flip/revert procedure could favor recovering a few sign coordinates with large magnitude counterparts over many sign coordinates with small magnitude counterparts. From our experiments on the noisy FGSM, this still suffices to generate adversarial examples: an attack with 30% correct sign bits (those corresponding to the top gradient coordinate magnitudes) is more effective than an attack with 50% correct arbitrary sign bits, as shown in Figure 1. Put differently, we would like to recover as many sign bits as possible with as few queries as possible; however, if we can only recover a few, they should be those that correspond to coordinates with large gradient magnitude. This notion is in line with the flip/revert procedure. We employ the above observation in a divide-and-conquer search which we refer to as SignHunter. As outlined in Algorithm 1, the technique starts with an initial guess of the sign vector q_1 (s in Algorithm 1). It then proceeds to flip the sign of all the coordinates to get a new sign vector q_2, and reverts the flips if the loss oracle returns a value L(x + δq_2, y) (or, equivalently, a directional derivative) less than the best obtained so far, L(x + δq_1, y). SignHunter applies the same rule to the first half of the coordinates, the second half, the first quadrant, the second quadrant, and so on. For a search space of dimension n, SignHunter needs 2^(⌈log(n)⌉+1) − 1 sign flips to complete its search. If the query budget is not exhausted by then, one can update x with the recovered signs and restart the procedure at the updated point with a new starting code s. Consider, for example, starting with a sign vector whose Hamming distance to the optimal sign vector q* is n/2, agreeing with q* in the first half of the coordinates. In this case, SignHunter needs just four queries to recover the entire sign vector independent of n, whereas the sequential bit flipping still requires n + 2 queries. In the next theorem, we show that SignHunter is guaranteed to perform at least as well as FGSM with O(n) oracle queries. To our knowledge, no such guarantees exist for any black-box attack from the literature. Theorem 1. (Optimality of SignHunter) Given 2^(⌈log(n)⌉+1) queries and that the directional derivative is well approximated by the finite difference (Eq. 1), SignHunter is at least as effective as FGSM in crafting adversarial examples. The proof can be found in Appendix B. Theorem 1 provides an upper bound on the number of queries required for SignHunter to recover the gradient sign bits and perform as well as FGSM. In practice (as will be shown in our experiments), SignHunter crafts adversarial examples with a small fraction of this upper bound.
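A compact sketch of the divide-and-conquer search just described (Algorithm 1): flip the signs of a whole chunk, keep the flip only if the loss-oracle value improves (comparing loss values is equivalent to comparing directional derivatives, up to the constant L(x, y)/δ), and halve the chunk size once every chunk at the current granularity has been tried. Plain NumPy; the loss_at interface is an assumption.

import numpy as np

def signhunter(loss_at, n, budget):
    # loss_at(s): returns L(x + delta * s, y) for a sign vector s in {-1, +1}^n
    s = np.ones(n)                     # initial guess q_1 = (+1, ..., +1)
    best = loss_at(s)
    queries = 1
    chunk = n                          # start by flipping all coordinates at once
    while chunk >= 1 and queries < budget:
        for start in range(0, n, chunk):
            if queries >= budget:
                break
            s[start:start + chunk] *= -1          # flip this chunk
            val = loss_at(s)
            queries += 1
            if val >= best:
                best = val                        # keep the flip
            else:
                s[start:start + chunk] *= -1      # revert the flip
        chunk //= 2                               # refine: halves, quarters, ...
    return s, best, queries

For n a power of two, the full sweep uses 1 + 2 + ... + n = 2n − 1 flips, matching the 2^(⌈log(n)⌉+1) − 1 count above.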
The rationale here is that we do not need to recover the sign bits exactly; we rather need a fast convergence to an adversarially helpful sign vector s. In our setup, we use the best sign estimation obtained s so far in a similar fashion to FGSM, whereas full-gradient estimation approaches often employ an iterative scheme of T steps within the perturbation ball B p (x,), calling the gradient estimation routine in every step leading to a search complexity of nT. Instead, our gradient sign estimation routine runs at the top level of our adversarial example generation procedure. Further, SignHunter is amenable to parallel hardware architecture and has a smaller memory footprint (just sign bits) and thus can carry out attacks in batches more efficiently. Crafting black-box adversarial attacks with SignHunter is outlined in Algorithm 2. Algorithm 2 Black-Box Adversarial Example Generation with SignHunter x init: input to be perturbed, yinit: xinit's true label, Bp(.,): p perturbation ball of radius L: loss function of the model under attack 1: δ ← // set finite-difference probe to perturbation bound 2: x o ← x init 3: Define the function g as SignHunter.step s ← SignHunter.get_current_sign_estimate 9: if SignHunter.is_done then 11: Define the function g as in Line 3 (with x o update) SignHunter.init(g) 14: return x We evaluate SignHunter and compare it with established algorithms from the literature: , , and Bandits T D in terms of effectiveness in crafting (without loss of generality) untargeted black-box adversarial examples. To highlight SignHunter's adaptive query construction, we introduce a variant of Algorithm 2, named Rand. At every iteration, Rand's sign vector is sampled uniformly from H. 4. Both ∞ and 2 threat models are considered on the MNIST, CIFAR10, and IMAGENET datasets. Code and data for the experiments can be found at https://bit.ly/3acIHoQ. Experiments Setup. Our experiment setup is similar to . Each attacker is given a budget of 10, 000 oracle queries per attack attempt and is evaluated on 1000 images from the test sets of MNIST, CIFAR10, and the validation set of IMAGENET. We did not find a standard practice for setting the perturbation bound. Figure 2: Performance of black-box attacks in the ∞ and 2 perturbation constraint. The plots show the average number of queries used per successful image for each attack when reaching a specified success rate. 2019), which are smaller than the one used in . We use the observed bound in for CIFAR10. We show based on standard models-i.e., models that are not adversarially hardened. For MNIST and CIFAR10, the naturally trained models from (Hyperparameters Setup. While SignHunter does not have any hyperparameters, to fairly compare it with the other algorithms, we tuned their hyperparameters starting with the default values reported by the corresponding authors. The finite difference probe δ for SignHunter is set to the perturbation bound as it is used for both computing the finite difference and crafting the adversarial examplessee Line 1 in Algorithm 2. This tuning-free aspect of SignHunter offers a robustness advantage over algorithms which require expert hypertuning. Details on the hyperparameter setup are available in Appendix C. Results. Figure 2 shows the trade-off between the success (evasion) rate and the mean number of queries (of the successful attacks, per convention) needed to generate an adversarial example for the MNIST, CIFAR10, and IMAGENET classifiers under the ∞ and 2 perturbation constraints. 
These plots indicate the average number of queries required for a desired success rate. Table 1 represents a tabulated summary of plots (b) and (e) of Figure 2. 5 We observe the following: For any given success rate, SignHunter dominates the previous state of the art approaches in all settings except the IMAGENET 2 setup, where Bandits T D shows a better query efficiency when the desired success rate is roughly greater than 0.35. This is all the more remarkable because Bandits T D exploits tiles, a data-dependent prior, searching over 50 × 50 × 3 dimensions for IMAGENET, while SignHunter searches over the explicit data 299 × 299 × 3 dimensions: 36× more dimensions. ∞ vs. 2 Perturbation Threat. In view of Bandits T D's advantage, SignHunter is remarkably efficient in the ∞ setup, achieving a 100% evasion using-on average-just 12 queries per image against the MNIST classifier! In the 2 setup, SignHunter's performance degrades-yet it still outperforms the other algorithms. This is expected, since SignHunter perturbs all the coordinates with the same magnitude and the 2 perturbation bound 2 for all the datasets in our experiments is set such that 2 / √ n is significantly less than the ∞ perturbation bound ∞. Take the case of MNIST (n = 28 × 28), where ∞ = 0.3 and 2 = 3. For SignHunter, the 2 setup is equivalent to an ∞ perturbation bound of 3/28 ≈ 0.1. The employed 2 perturbation bounds give the state of the art-continuous optimization based-approaches more perturbation options. For instance, it is possible for NES to perturb just one pixel in an MNIST image by a magnitude of 3; two pixels by a magnitude of 3/ √ 2 ≈ 2.1 each; ten pixels by a magnitude of 3/ √ 10 ≈ 0.9 each, etc. On the other hand, the binary optimization view of SignHunter limits it to always perturb all 28 × 28 pixels by a magnitude of 3/28 ≈ 0.1. Despite its fewer degrees of freedom, SignHunter maintains its effectiveness in the 2 setup. The plots can also be interpreted as a sensitivity assessment of SignHunter as gets smaller going from ∞ to the 2 perturbation threat. SignHunter vs FGSM. The performance of SignHunter is in line with Theorem 1 when compared with the performance of FGSM (the noisy FGSM at k = 100% in Figures 1 and 2 of Appendix A) in both ∞ and 2 setups across all datasets. For instance, FGSM has a failure rate of 0.32 for CIFAR10 2 (Appendix A, Figure 2 (b)), while SignHunter achieves a failure rate of 0.21 with 692.39 < 2n = 2 × 3 × 32 × 32 = 6144 queries (Appendix D, Table 8). Note that for IMAGENET, SignHunter outperforms FGSM with a query budget of 10, 000 queries, a fraction of the theoretical number of queries required 2n = 536, 406 to perform at least as well. Incorporating SignHunter in an iterative framework of perturbing the data point x till the query budget is exhausted (Lines 10 to 14 in Algorithm 2) supports the observation in white-box settings that iterative FGSM-or Projected Gradient Descent (PGD)-is stronger than FGSM . This is evident by the upticks in SignHunter's performance on the MNIST 2 case (Appendix D, Figure 4), which happens after every iteration (after every other 2 × 28 × 28 queries). Gradient Estimation. Plots of the Hamming similarity capture the number of recovered sign bits, while plots of the average cosine similarity capture the value of Eq. 2. Both SignHunter and Bandits T D consistently optimize both metrics. 
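For reference, the two estimation metrics referred to here, as they are spelled out in the appendix figure captions, are the Hamming similarity 1 − ||sign(ĝ) − q*||_H / n and the normalized dot product between ĝ and g*:

```python
import numpy as np

def hamming_similarity(g_hat, g_star):
    # fraction of sign bits on which the estimate agrees with the true gradient
    return 1.0 - np.mean(np.sign(g_hat) != np.sign(g_star))

def cosine_similarity(g_hat, g_star):
    return float(g_hat @ g_star /
                 (np.linalg.norm(g_hat) * np.linalg.norm(g_star) + 1e-12))
```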
In general, SignHunter (Bandits T D) converges faster especially on the Hamming(cosine) metric as it is estimating the signs(signs and magnitudes) compared to Bandits T D's full gradient (SignHunter's gradient sign) estimation. This is most obvious in the IMAGENET 2 setup (Appendix D, Figure 6). Note that once an attack is successful, the estimated gradient sign at that point is used for the rest of the plot. This explains why, in the ∞ settings, SignHunter's plot does not improve compared to its 2 counterpart, as most of the attacks are successful in the very first few queries made to the loss oracle and no further refined estimation is required. Another possible reason is that the gradient direction can be very local and does not capture the global loss landscape compared to SignHunter's estimation. More on this is discussed in Section 6. SignHunter vs. Rand. Given these , one could argue that SignHunter is effective, because it maximally perturbs datapoints to the vertices of their perturbation balls. 6 However, Rand's poor performance does not support this argument and highlights the effectiveness of SignHunter's adaptive query construction. Except for MNIST and CIFAR10 ∞ settings, Rand performs worse than the full-gradient estimation approaches, although it perturbs datapoints similar to SignHunter. Overall, SignHunter is 3.8× less failure-prone than the state-of-the-art approaches combined, and spends over all the images (successful and unsuccessful attacks) 2.5× less queries. To complement Section 4, we evaluate SignHunter against adversarial training, a way to improve the robustness of DNNs . Specifically, we attacked the secret models used 6 We define perturbation vertices as extreme points of the ball Bp(x,). That is, x ± ∞, where ∞ = when p = ∞ and ∞ = / √ n when p = 2. 7 The number of queries spent is computed based on Tables 7-9 of Appendix D as (1 -fail_rate) * avg_#_queries + fail_rate * 10,000. in public challenges for MNIST and CIFAR10. For IMAGENET, we used ensemble adversarial training, a method that argues security against black-box attacks based on transferability Tramèr et al. (2017a). Appendix E reports the same metrics used in Section 4 as well as a tabulated summary for the discussed below. Public MNIST Black-Box Attack Challenge. In line with the challenge setup, 10, 000 test images were used with an ∞ perturbation bound of = 0.3. Although the secret model is released, we treated it as a black box similar to our experiments in Section 4. No maximum query budget was specified, so we set it to 5, 000 queries. This is equal to the number of iterations given to a PGD attack in the white-box setup of the challenge: 100-steps with 50 random restarts. SignHunter's attacks ed in the lowest model accuracy of 91.47%, outperforming all the submitted attacks to the challenge, with an average number of queries of 233 per successful attack. Note that the attacks submitted to the challenge are based on transferability and do not query the model of interest. On the other hand, the most powerful white-box attack by. The Gradient-Aligned Adversarial Subspace (GAAS) method Tramèr et al. (2017b) provides an approximation of the adversarial cone dimensionality by finding a set of orthogonal perturbations of norm that are all adversarial with respect to the model. By linearizing the model's loss function, this is reduced to finding orthogonal vectors that are maximally aligned with its gradient g * -or its gradient sign q * in the ∞ setup Tramèr et al. (2017a). 
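Once such a set of orthogonal perturbations is available (their construction follows Lemma 7 of Tramèr et al. and is not reproduced here), the quantity compared in the next paragraph can be measured by counting how many of them are adversarial for a given point. The black-box predict function below is an assumed stand-in for the attacked model.

```python
import numpy as np

def adversarial_cone_size(predict, x, y_true, directions):
    # directions: orthogonal perturbations r_i with ||r_i||_inf = eps
    hits = 0
    for r in directions:
        if predict(np.clip(x + r, 0.0, 1.0)) != y_true:
            hits += 1
    return hits   # the "at least k" curves follow by thresholding this count per point
```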
In Figure 3, we reproduce (Tramèr et al., 2017a, Fig. 2) and show that aligning the orthogonal vectors with SignHunter's estimation (we refer to this approach as SAAS) instead of aligning them with the gradient (GAAS) in a better approximation of the adversarial cone for the two IMAGENET models considered earlier, even when the number of queries given to SignHunter is just a fraction of the dimensionality n. Through its query-efficient finite-difference sign estimation, SignHunter is able to quickly capture the larger-scale variation of the loss landscape in the point's neighborhood, rather than the infinitesimal point-wise variation that the gradient provides, which can be very local. This is important in adversarial settings, where the loss landscape is analyzed in the vicinity of the point; Tramèr et al. (2017a). One interesting observation at k = 1 (note here, r 1 = q *) across all is that GAAS finds adversarial directions for fewer points against the v3 adv-ens4 model than the naturally trained model v3, whereas SAAS reports similar probability of adversarial directions for both. This contrast suggests that ensemble adversarial training Tramèr et al. (2017a) still exhibits the gradient masking effect, where the gradient poorly approximates the global loss landscape. Assuming a black-box threat model, we studied the problem of generating adversarial examples for neural nets and proposed the gradient sign estimation problem as the core challenge in crafting (Tramèr et al., 2017a, Figure 2), for 500 correctly classified points x and ∈ {4, 10, 16}, we plot the probability that we find at least k orthogonal vectors r i -computed based on (Tramèr et al., 2017a, Lemma 7)-such that ||r i || ∞ = and x + r i is misclassified. For both models and for the same points x, SAAS finds more orthogonal adversarial vectors r i than GAAS, thereby providing a better characterization of the space of adversarial examples in the vicinity of a point, albeit without a white-box access to the models. these examples. We formulate the problem as a binary black-box optimization one: maximizing the directional derivative in the direction of {±1} n vectors, approximated by the finite difference of the queries' loss values. The separability property of the directional derivative helped us devise SignHunter, a query-efficient, tuning-free divide-and-conquer algorithm with a small memory footprint that is guaranteed to perform at least as well as FGSM after O(n) queries. No similar guarantee is found in the literature. In practice, SignHunter needs a mere fraction of this number of queries to craft adversarial examples. The algorithm is one of its kind to construct adaptive queries instead of queries that are based on i.i.d. random vectors. Robust to gradient masking, SignHunter can also be used to estimate the dimensionality of adversarial cones. Moreover, SignHunter achieves the highest evasion rate on two public black-box attack challenges and breaks a model that argues robustness against substitute-model attacks. This section shows the performance of the noisy FGSM on standard models (described in Section 1 of the main paper) on the MNIST, CIFAR10 and IMAGENET datasets. In Figure 4, we consider the ∞ threat perturbation constraint. Figure 5 reports the performance for the 2 setup. 
Similar to , for each k in the experiment, the top k percent of the signs of the coordinates-chosen either randomly (random-k) or by the corresponding magnitude |∂L(x, y)/∂xi| (top-k)-are set correctly, and the rest are set to −1 or +1 at random. The misclassification rate shown considers only images that were correctly classified (with no adversarial perturbation). In accordance with the models' accuracy, there were 987, 962, and 792 such images for MNIST, CIFAR10, and IMAGENET out of the sampled 1000 images, respectively. These figures also serve as a validation for Theorem 1 of the main paper when compared to SignHunter's performance shown in Appendix D., y) ) on random 1000 images from the corresponding evaluation dataset, with the maximum allowed 2 perturbation being set to 3, 127, and 5, respectively. Compared to Figure 4, the performance on MNIST and CIFAR10 drops significantly. In this section, we present a proof of Theorem 1 of Section 3. Note that the theorem makes the assumption that the finite difference is a good approximation of the directional derivative. This assumption has been the core concept behind most of the black-box adversarial attack algorithms and we state it here for completeness. Theorem 1. (Optimality of SignHunter) Given 2 log(n)+1 queries and that the directional derivative is well approximated by the finite-difference (Eq. 1 in the main paper), SignHunter is at least as effective as FGSM in crafting adversarial examples. Proof. Based on the separability property of the directional derivative, the ith coordinate of the gradient sign vector can be recovered as follows: construct two binary codes u and v such that only their ith bit is different. Therefore, we have From the definition of SignHunter, this is carried out for all the n coordinates after 2 log(n)+1 queries. Put it differently, after 2 log(n)+1 queries, SignHunter has flipped every coordinate alone recovering its sign exactly as shown in Eq. 4 above. Therefore, the gradient sign vector is fully recovered, and one can employ the FGSM attack to craft an adversarial example. Note that this is under the assumption that our finite difference approximation of the directional derivative (Eq. 1 in the main paper) is good enough (or at least rank-preserving). This section outlines the experiments setup. To ensure a fair comparison among the considered algorithms, we did our best in tuning their hyperparameters. Initially, the hyperparameters were set to the values reported by the corresponding authors, for which we observed suboptimal performance. We made use of a synthetic concave loss function to efficiently tune the algorithms for each dataset × perturbation constraint combination. The performance curves on the synthetic loss function using the tuned values of the hyperparameters did show consistency with the reported from the literature. For instance, we noted that ZO-SignSGD converges faster than NES, and that BanditsT D outperformed the rest of the algorithms towards the end of query budget. Further, in our adversarial examples generation experiments, we observed failure rate and query efficiency in line with the algorithms' corresponding papers-e.g., compare the performance of BanditsT D and NES in Table 9 of Appendix D with (, Table 1). That said, we invite the community to provide their best tuned attacks. Note that SignHunter does not have any hyperparameters to tune. 
The finite difference probe δ for SignHunter is set to the perturbation bound as it is used for for both computing the finite difference and crafting the adversarial examples-see Line 1 in Algorithm 2 of the main paper. This tuning-free setup of SignHunter offers a robust edge over the state-of-the-art black-box attacks, which often require expert knowledge to carefully tune their parameters. Table 3 describes the general setup for the experiments. Table 2 lists the sources of the models we attacked in this work, while Tables 4, 5, 6, and 7 outline the algorithms' hyperparameters. Figure 6 shows the performance of the considered algorithms on a synthetic concave loss function after tuning their hyperparameters. All experiments were run on a CUDA-enabled NVIDIA Tesla V100 16GB. A possible explanation of SignHunter's superb performance is that the synthetic loss function is well-behaved in terms of its gradient given an image. That is, most of gradient coordinates share the same sign, since pixels tend to have the same values and the optimal value for all the pixels is the same. Thus, SignHunter will recover the true gradient sign with as few queries as possible (recall the example in Section 3 of the main paper). Moreover, given the structure of the synthetic loss function, the optimal loss value is always at the boundary of the perturbation region; the boundary is where SignHunter samples its perturbations., where x * = xmin+xmax 2 using a query limit of 1000 queries for each image. Note that in all, Bandits T D outperforms both NES and ZO-SignSGD. Also, we observe the same behavior reported by on the fast convergence of ZO-SignSGD compared to NES. We did not tune SignHunter; it does not have any tunable parameters. This section shows of our experiments in crafting adversarial black-box examples on standard models in the form of tables and performance traces, namely Figures 7, 8, and 9; and Tables 8, 9, and 10. Table 8: Summary of attacks effectiveness on MNIST under ∞ and 2 perturbation constraints, and with a query limit of 10, 000 queries. The Failure Rate ∈ column lists the fraction of failed attacks over 1000 images. The Avg. # Queries column reports the average number of queries made to the loss oracle only over successful attacks. Table 10: Summary of attacks effectiveness on IMAGENET under ∞ and 2 perturbation constraints, and with a query limit of 10, 000 queries. The Failure Rate ∈ column lists the fraction of failed attacks over 1000 images. The Avg. # Queries column reports the average number of queries made to the loss oracle only over successful attacks. Hamming Similarity row shows the Hamming similarity of the sign of the attack's estimated gradientĝ with true gradient's sign q *, computed as 1 − ||sign(ĝ) − q * || H /n and averaged over all images. Likewise, plots of the Avg. Cosine Similarity row show the normalized dot product ofĝ and g * averaged over all images. The Success Rate row reports the attacks' cumulative distribution functions for the number of queries required to carry out a successful attack up to the query limit of 10, 000 queries. The Avg. # Queries row reports the average number of queries used per successful image for each attack when reaching a specified success rate: the more effective the attack, the closer its curve is to the bottom right of the plot. Hamming Similarity row shows the Hamming similarity of the sign of the attack's estimated gradientĝ with true gradient's sign q *, computed as 1 − ||sign(ĝ) − q * || H /n and averaged over all images. 
Likewise, plots of the Avg. Cosine Similarity row show the normalized dot product ofĝ and g * averaged over all images. The Success Rate row reports the attacks' cumulative distribution functions for the number of queries required to carry out a successful attack up to the query limit of 10, 000 queries. The Avg. # Queries row reports the average number of queries used per successful image for each attack when reaching a specified success rate: the more effective the attack, the closer its curve is to the bottom right of the plot. Figure 9: Performance curves of attacks on IMAGENET for ∞ (first column) and 2 (second column) perturbation constraints. Plots of Avg. Loss row reports the loss as a function of the number of queries averaged over all images. The Avg. Hamming Similarity row shows the Hamming similarity of the sign of the attack's estimated gradientĝ with true gradient's sign q *, computed as 1 − ||sign(ĝ) − q * || H /n and averaged over all images. Likewise, plots of the Avg. Cosine Similarity row show the normalized dot product ofĝ and g * averaged over all images. The Success Rate row reports the attacks' cumulative distribution functions for the number of queries required to carry out a successful attack up to the query limit of 10, 000 queries. The Avg. # Queries row reports the average number of queries used per successful image for each attack when reaching a specified success rate: the more effective the attack, the closer its curve is to the bottom right of the plot. This section shows of our experiments in crafting adversarial black-box examples on adversarially trained models in the form of tables and performance traces, namely Tables 11, 12, Model Accuracy SignHunter (Algorithm 2 in the main paper) 91.47% 92.76% PGD against 3 independently & adversarially trained copies of the network FGSM on the CW loss for model B from (Tramèr et al., 2017a) 94.36% FGSM on the CW loss for the naturally trained public network 96.08% PGD on the cross-entropy loss for the naturally trained public network 96.81% Attack using Gaussian Filter for selected pixels on the adversarially trained public network FGSM on the cross-entropy loss for the adversarially trained public network PGD on the cross-entropy loss for the adversarially trained public network SignHunter (Algorithm 2 in the main paper) 47.16% PGD on the cross-entropy loss for the adversarially trained public network PGD on the CW loss for the adversarially trained public network 64.38% FGSM on the CW loss for the adversarially trained public network 67.25% FGSM on the CW loss for the naturally trained public network 85.23% Table 13: Top 1 Error Percentage. The numbers between brackets are computed on 10,000 images from the validation set. The rest are from (Tramèr et al., 2017a, Figure 10 : Performance curves of attacks on the public black-box challenges for MNIST (first column), CIFAR10 (second column) and IMAGENET (third column). Plots of Avg. Loss row reports the loss as a function of the number of queries averaged over all images. The Avg. Hamming Similarity row shows the Hamming similarity of the sign of the attack's estimated gradientĝ with true gradient's sign q *, computed as 1 − ||sign(ĝ) − q * || H /n and averaged over all images. Likewise, plots of the Avg. Cosine Similarity row show the normalized dot product ofĝ and g * averaged over all images. 
The Success Rate row reports the attacks' cumulative distribution functions for the number of queries required to carry out a successful attack up to the query limit of 5, 000 queries for MNIST and CIFAR10 (1, 000 queries for IMAGENET). The Avg. # Queries row reports the average number of queries used per successful image for each attack when reaching a specified success rate: the more effective the attack, the closer its curve is to the bottom right of the plot. This section illustrates our experiment on the distribution of the magnitudes of gradient coordinates as summarized in Figure 11. How to read the plots: Consider the first histogram in Plot (a) from below; it corresponds to the 1000 th image from the sampled MNIST evaluation set, plotting the histogram of the values {|∂L(x, y)/∂xi|} 1≤i≤n, where the MNIST dataset has dimensionality n = 784. These values are in the range [0, 0.002]. Overall, the values are fairly concentrated-with exceptions, in Plot (e) for instance, the magnitudes of the ∼ 400 th image's gradient coordinates are spread from 0 to ∼ 0.055. Original Images In this section, we show the performance of different sign flip schemes in comparison to SignHunter. Results are summarized in Figure 12. SignHunter's adaptive flips shows a clear advantage over other schemes despite having a worse upper-bound on the query complexity-e.g., Naive can retrieve the signs in n + 2 queries, as discussed in Section 3. In this section, we discuss recent work related to our proposition. Parsimonious Black-Box Adversarial Attacks . Our experiment on the public CIFAR10 black-box attack challenge corresponds to [1, Table 1]. The authors report a 48% success rate (52% model accuracy) with an average number of queries of 1261. On the other hand, our proposed algorithm achieves a 52.84% success rate (47.16% model accuracy) with an average number of queries of 569. Further, (, Table 2) corresponds to our in Appendix D, Table 9; the paper reports a 98.5% success rate with an average number of queries of 722. Our proposed algorithm achieves a 98% success rate with 578.56 average number of queries. Based on these numbers, SignHunter demonstrates better performance than 's attack. Simple Black-Box Attack (SIMBA) . The main distinction is that SIMBA performs a ternary flip over {−δ, 0, +δ} for one random single coordinate at an iteration with δ ≤. On the other hand, SignHunter performs a binary flip {−,} for a group of coordinates at an iteration. Most of's experiments were performed for the 2 perturbation constraint and against models different from those considered in this paper-except for the IMAGENET v3 model, which the authors find much more difficult to attack. The v3 curves at 10, 000 queries in (, Figure 4) for SIMBA (and its variant SIMBA-DCT) look comparable to SignHunter's of Figure 9. For completeness, we implemented SIMBA and evaluated it against the CIFAR10 model in Section 4. The are shown in Figure 13. In line with , SIMBA is a strong baseline in the 2 setup. However, its performance drops significantly in the ∞ setup. Table 4. Harmonica . Both SignHunter and Harmonica seek to optimize a black-box function over the binary hypercube {±1} n, albeit with different assumptions on the objective function. Harmonica assumes that the objective function can be approximated by a sparse and low degree polynomial in the Fourier basis. 
Our assumption with SignHunter is that the objective function is separable (Property 1, Section 3), this lets us optimize the black-box function with O(n) queries given an initial guess instead of searching over the 2 n vertices. If this assumption is not met, we can restart SignHunter with another guess with a search complexity of O(mn) where m is the number of restarts. With this difference in assumptions of the two algorithms, we conducted an empirical comparison using the two sample problems provided along with Harmonica's authors implementation. As shown in Table 14, the show that SignHunter optimizes the two problems with 8× less number of queries than Harmonica, not to mention the significant computational advantage. In other words, SignHunter in − 2 perturbation setup behaves exactly the same when used in / √ n − ∞ perturbation setup. This is illustrated in Figure 14 To highlight the additional perturbation space that other algorithms have over SignHunter in the 2 setup, we ran NES and BanditsT D as representative examples of standard and dimensionality-reduction-based algorithms against the CIFAR10 model used in Section 4 with an ∞ perturbation setup of = 127/ √ n. In this and and the 2 setup used in Section 4, SignHunter behaves the same, while the performance of NES and BanditsT D drops significantly from their 2 performance due to the reduction in the perturbation space. A possible fix to allow SignHunter to access the additional search space introduced in the 2 setup is to extend the notion of binary sign flips over {+1, −1} to ternary sign flips over {+1, 0, −1} and we intend to explore this thoroughly in a future work. Figure 15: Performance of black-box attacks in the ∞ and 2 perturbation constraints. The plots show the average number of queries used per successful image for each attack when reaching a specified success rate. Note that (b) is similar to the 2 setup examined in Section 4. | We present a sign-based, rather than magnitude-based, gradient estimation approach that shifts gradient estimation from continuous to binary black-box optimization. | 1,269 | scitldr |
Recurrent Neural Networks (RNNs) are widely used models for sequence data. Just like for feedforward networks, it has become common to build "deep" RNNs, i.e., stack multiple recurrent layers to obtain higher-level abstractions of the data. However, this works only for a handful of layers. Unlike feedforward networks, stacking more than a few recurrent units (e.g., LSTM cells) usually hurts model performance, the reason being vanishing or exploding gradients during training. We investigate the training of multi-layer RNNs and examine the magnitude of the gradients as they propagate through the network. We show that, depending on the structure of the basic recurrent unit, the gradients are systematically attenuated or amplified, so that with an increasing depth they tend to vanish, respectively explode. Based on our analysis we design a new type of gated cell that better preserves gradient magnitude, and therefore makes it possible to train deeper RNNs. We experimentally validate our design with five different sequence modelling tasks on three different datasets. The proposed stackable recurrent (STAR) cell allows for substantially deeper recurrent architectures, with improved performance. Recurrent Neural Networks (RNN) have established themselves as a powerful tool for modelling sequential data. They have significantly advanced a number of applications, notably language processing and speech recognition (; ;). The basic building block of an RNN is a computational unit (or cell) that combines two inputs: the data of the current time step in the sequence and the unit's own output from the previous time step. While RNNs are an effective approach that can in principle handle sequences of arbitrary and varying length, they are (in their basic form) challenged by long-term dependencies, since learning those would require the propagation of gradients over many time steps. To alleviate this limitation, gated architectures have been proposed, most prominently Long Short-Term Memory (LSTM) cells and Gated Recurrent Units . They use a gating mechanism to store and propagate information over longer time intervals, thus mitigating the vanishing gradient problem. Although such networks can, in principle, capture long-term dependencies, it is known that more abstract and longer-term features are often represented better by deeper architectures . To that end, multiple recurrent cells are stacked on top of each other in a feedforward manner, i.e., the output (or the hidden state) of the lower cell is connected to the input gate of the next-higher cell. Many works have used such deep recurrent architectures, e.g., , and have shown their ability to extract more complex features from the input and make better predictions. The need for multi-layer RNNs is particularly apparent for image-like input data, where multiple convolutional layers are required to extract a good representation, while the recurrence captures the evolution of each layer over time. Since recurrent architectures are trained by propagating gradients across time, it is convenient to "unwrap" them into a lattice with two axes for depth (abstraction level) and time, see Fig. 1. This view makes it apparent that gradients flow in two directions, namely backwards in time and downwards from deeper to shallower layers. In this paper we ask the question how the basic recurrent unit must be designed to ensure the "vertical" gradient flow across layers is stable and not impaired by vanishing or exploding gradients. 
We show that stacking several layers of common RNN cells, by their construction, leads to instabilities (e.g., for deep LSTMs the gradients tend to vanish; for deep vanilla RNNs they tend to explode). Our study makes three contributions: (i) We analyse how the magnitude of the gradient changes as it propagates through a cell of the two-dimensional deep RNN lattice. We show that, depending on the inner architecture of the employed RNN cell, gradients tend to be either amplified or attenuated. As the depth increases, the repeated amplification (resp., attenuation) increases the risk of exploding (resp., vanishing) gradients. (ii) We then leverage our analysis to design a new form of gated cell, termed the STAR (stackable recurrent) unit, which better preserves the gradient magnitude inside the RNN lattice. It can therefore be stacked to much greater depth and still remains trainable. (iii) Finally, we compare deep recurrent architectures built from different basic cells in an extensive set of experiments with three popular datasets. The confirm our analysis: training deep recurrent nets fail with most conventional units, whereas the proposed STAR unit allows for significantly deeper architectures. In several cases, the ability to go deeper also leads to improved performance. Vanishing or exploding gradients during training are a long-standing problem of recurrent (and other) neural networks . Perhaps the most effective measure to address them so far has been to introduce gating mechanisms in the RNN structure, as first proposed by in the form of the LSTM (long short-term memory), and later by other architectures such as gated recurrent units . Importantly, RNN training needs proper initialisation. and have shown that initializing the weight matrices with identity and orthogonal matrices can be useful to stabilise the training. and further develop this idea and impose orthogonality throughout the entire training to keep the amplification factor of the weight matrices close to unity, leading to a more stable gradient flow. Unfortunately, it has been shown that such hard orthogonality constraints hurt the representation power of the model and in some cases even destabilise the optimisation. Another line of work has studied ways to mitigate the vanishing gradient problem by introducing additional (skip) connections across time and/or layers. have shown that skipping state updates in RNN shrinks the effective computation graph and thereby helps to learn longer-range dependencies. introduce a residual connection between LSTM layers. propose a gated feedback RNN that extends the stacked RNN architecture with extra connections. An obvious disadvantage of such an architecture are the extra computation and memory costs of the additional connections. Moreover, the authors only report for rather shallow networks up to 3 layers. Despite the described efforts, it remains challenging to train deep RNNs. have proposed Recurrent Highway Networks (RHN) that combine LSTMs and highway networks to train deeper architectures. RHN are popular and perform well on language modelling tasks, but they are still prone to exploding gradients, as illustrated in our experiments. (a) propose a restricted RNN where all interactions are removed between neurons in the hidden state of a layer. This appears to greatly reduce the exploding gradient problem (allowing up to 21 layers), at the cost of a much lower representation power per layer. 
To process image sequence data, computer vision systems often rely on Convolutional LSTMs (convLSTM,). But while very deep CNNs are very effective and now standard , stacks of more than a few convLSTMs do not train well. In practice, shallow versions are preferred, for instance (b) use a single layer for action recognition, and use two layers to recognize for hand gestures (combined with a deeper feature extractor without recursion). We note that attempts to construct a deep counterpart to the Kalman filter can also be interpreted as recurrent networks (; ;). These provide a probabilistic, generative perspective on RNNs, but are even more complicated to train. It is at this point unclear how the basic units of these architectures could be stacked into a deep, multi-layer representation. A RNN cell is a non-linear transformation that maps the input signal x t at time t and the hidden state of the previous time step t − 1 to the current hidden state h t: with W the trainable parameters of the cell. The input sequences have an overall length of T, which can be variable. It depends on the task whether the relevant target prediction, for which also the loss L should be computed, is the final state h T, the complete sequence of states {h t}, or a single sequence label, typically defined as the average When stacking multiple RNN cells on top of each other, one passes the hidden state of the lower level l − 1 as input to the next-higher level l, see Fig. 1, which in mathematical terms corresponds to the recurrence relation h Temporal unfolding leads to a two-dimensional lattice with depth L and length T, as in Fig. 1. In this computation diagram, the forward pass runs from left to right and from bottom to top. Gradients flow in opposite direction: at each cell the gradient w.r.t. the loss arrives at the output gate and is used to compute the gradient w.r.t. (i) the weights, (ii) the input, and (iii) the previous hidden state. The latter two gradients are then propagated through the respective gates to the preceding cells in time and depth. In the following, we investigate how the magnitude of these gradients changes across the lattice. The analysis, backed up by numerical simulations, shows that common RNN cells are biased towards attenuating or amplifying the gradients, and thus prone to destabilising the training of deep recurrent networks. At a single cell in the lattice the gradient w.r.t. the trainable weights are where ∂h l t ∂w denotes the Jacobian matrix and g h l t is a column vector containing the partial derivatives of the loss w.r.t. the cell's output (hidden) state. From the equation, it becomes clear that the Jacobian acts as a "gain matrix" on the gradients, and should on average preserve their magnitude to prevent them from vanishing or exploding. By expanding the gradient g h l t we obtain the recurrence for propagation, from which we get the two Jacobians where D x denotes a diagonal matrix with the elements of vector x as diagonal entries. Ideally, we would like to know the expected values of the two matrices' singular values. Unfortunately, there is no easy way to derive closed-form analytical expressions for them, but we can compute them for a fixed, representative point. Perhaps the most natural and illustrative choice is to set h l−1 t = h l t−1 = b = 0, and to further choose orthogonal weight matrices W h and W x, a popular initialisation strategy. Since the derivative tanh = 1, the singular values of all matrices in Eq. are equal to 1 in this configuration. 
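This representative-point claim is easy to verify numerically. The snippet below assumes the standard vanilla RNN update h_t = tanh(W_h h_{t-1} + W_x x_t + b), which is what the display above appears to denote; at the zero state with orthogonal weights both Jacobians indeed have all singular values equal to 1.

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)
W_h = np.linalg.qr(rng.normal(size=(d, d)))[0]      # orthogonal initialisation
W_x = np.linalg.qr(rng.normal(size=(d, d)))[0]
h_prev, x, b = np.zeros(d), np.zeros(d), np.zeros(d)
pre = W_h @ h_prev + W_x @ x + b
D = np.diag(1.0 - np.tanh(pre) ** 2)                # derivative of tanh at the pre-activation
print(np.linalg.svd(D @ W_h, compute_uv=False))     # all ones: Jacobian w.r.t. h_{t-1}
print(np.linalg.svd(D @ W_x, compute_uv=False))     # all ones: Jacobian w.r.t. the input
```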
and g h l t+1 we expect to obtain a gradient g h l ) over all time steps. As the gradients flow back through time and layers, for a network of vanilla RNN units they get amplified; for LSTM units they get attenuated; whereas the proposed STAR unit approximately preserves their magnitude. in the time direction, but cannot help the flow through the layers. Again the numerical simulation support our hypothesis, as can be seen in Fig. 2. The LSTM gradients propagate relatively well backward through time, but vanish quickly towards shallower layers. We refer to the appendix for further numerical analysis, e.g., LSTMs with only a forget gate, and GRUs. Here, we briefly draw some connections between our analysis and the empirical of , who propose a gated feedback RNN (GFRNN) that extends the stacked RNN architecture with extra connections between adjacent layers. Empirically, GFRNN improves a 3-layer LSTM, but degrades the vanilla RNN performance. We conjecture that this might be due to the extra connections strengthening the gradient propagation. According to our findings, the additional gradient flow would benefit the LSTM, by bolstering the dwindling gradients; whereas for the vRNN, where the initial gradients are already too high, the added flow might be counterproductive. Building on the analysis above, we introduce a novel RNN cell designed to avoid vanishing or exploding gradients as much as possible. We start from the Jacobian matrix of the LSTM cell and examine in more detail which design features are responsible for the low singular values. In equation 8 we see that every multiplication with tanh non-linearities (D tanh ), gating functions (D σ ), and with their derivatives can only ever decrease the singular values of W, since all this terms are always <1. The effect is particularly pronounced for the sigmoid and its derivative, |σ (·)| ≤ 0.25 and E[|σ (x)|] = 0.5 for zero-mean, symmetric distribution of x. In particular, the output gate o l t is a sigmoid and plays a major role in shrinking the overall gradients, as it multiplicatively affects all parts of both Jacobians. As a first measure, we thus propose to remove the output gate. A secondary consequence of this measure is that now h l t and c l t carry the same information (the hidden state becomes an element-wise non-linear transformation of the cell state). To avoid this duplication and further simplify the design, we transfer the tanh non-linearity to the hidden state and remove the cell state altogether. As a final modification, we also remove the input gate i l t from the architecture. We have empirically observed that the presence of the input gate does not significantly improve performance, moreover, it actually harms the training for deeper networks. This empirical observation is in line with the of van der , who show that removing the input and output gates does not greatly affect the performance of LSTMs. More formally, our proposed STAR cell in the l-th layer takes the input h l−1 t (in the first layer, x t) at time t and non-linearly projects it to the space where the hidden vector h l lives, equation 10. Furthermore, the previous hidden state and the new input are combined into the gating variable k l t (equation 11). k l t is our analogue of the forget gate and controls how the information from previous hidden state and the new input are fused into a new hidden state. 
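A compact PyTorch rendering of the cell described by equations 10 and 11 is given below. It is reconstructed from the prose (the displays themselves did not survive extraction), so the exact placement of biases and nonlinearities is an assumption; the fusion step reflects the stated role of k, with large k letting the projected input dominate and small k conserving the previous hidden state.

```python
import torch
import torch.nn as nn

class STARCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W_z = nn.Linear(input_size, hidden_size)               # Eq. 10: input projection
        self.W_k = nn.Linear(input_size, hidden_size)               # Eq. 11: gate, input part
        self.U_k = nn.Linear(hidden_size, hidden_size, bias=False)  # Eq. 11: gate, recurrent part

    def forward(self, x, h_prev):
        z = torch.tanh(self.W_z(x))                                 # projected new input
        k = torch.sigmoid(self.W_k(x) + self.U_k(h_prev))           # single gate
        return torch.tanh((1.0 - k) * h_prev + k * z)               # fused new hidden state

# cell = STARCell(54, 128); h = torch.zeros(1, 128)
# for x_t in torch.randn(26, 1, 54): h = cell(x_t, h)
```

At the zero state with orthogonal weight matrices this form gives both Jacobians singular values of 0.5, matching the value derived for STAR below, which is the consistency check behind the reconstruction.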
One could also intuitively interpret k l t as a sort of "Kalman gain": if it is large, the new observation is deemed reliable and dominates; otherwise the previous hidden state is conserved. The complete dynamics of the STAR unit is given by the expressions These equations lead to the following Jacobian matrices: Coming back to our previous analysis for state zero and orthogonal weight matrices, each of the two Jacobians now has singular values equal to 0.5. I.e., they lie between the vRNN cell and the LSTM cell, and when added together roughly preserve the gradient magnitude. We repeat the same numerical simulations as above for the STAR cell, and find that it indeed maintains healthy gradient magnitudes throughout most of the deep RNN, see Fig. 2. In the next section, we show also on real datasets that deep RNNs built from STAR units can be trained to a significantly greater depth. As a final remark, the proposed modifications mean that the STAR architecture requires significantly less memory. With the same input and the same capacity in the hidden state, it reduces the memory footprint to <40% of a classical LSTM and even uses slightly less memory than a recurrent highway net. A more detailed comparison is given in the appendix. We evaluate the performance of several well-known RNN baselines as well as that of the proposed STAR cell on five different sequence modelling tasks with three different datasets: sequential versions of MNIST, which are a popular common testbed for recurrent networks; the more realistic TUM dataset, where time series of intensities observed in satellite images shall be classified into different agricultural crops; and Jester, for hand gesture recognition with convolutional RNNs. The recurrent units we compare include the vRNN, the LSTM, the LSTM with only a forget gate (van der), the RHN, and the proposed STAR. The experimental protocol is similar for all tasks: For each RNN variant, we train multiple versions with different depth (number of layers). For each variant and each depth, we report the performance of the model with the lowest validation loss. Classification performance is measured by the rate of correct predictions (top-1 accuracy). Throughout, we use the orthogonal initialisation for weight matrices. Code and trained models (in Tensorflow), as well as code for the simulations (in PyTorch), will be released. Training and network details for each experiment can be found in the appendix. 97.0% 82.0% 100 11k uRNN 95.1% 91.4% 512 9k FC uRNN 96.9% 94.1% 512 270k Soft ortho 94.1% 91.4% 128 18k AntisymRNN 98 The first experiment uses the MNIST dataset . The 28×28 grey-scale images of handwritten digits are flattened into 784×1 vectors, and the 784 values are sequentially presented to the RNN. After seeing all pixels, the model predicts the digit. The second task, pMNIST, is more challenging. Before flattening the images, the pixels are shuffled with a fixed random permutation, turning correlations between spatially close pixels into non-local long-range dependencies. The model needs to remember those dependencies between distance parts of the sequence to classify the digit correctly. Fig. 3a shows the average gradient norms per layer at the start of training, for 12-layer networks built from different RNN cells. Like in the simulations above, the propagation through the network increases the gradients for the vRNN and shrinks them for the LSTM. 
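A small PyTorch experiment in the spirit of this figure can be set up as follows: stack L recurrent layers, unroll them over T steps, and record the gradient norm arriving at each layer's initial hidden state. Layer width, sequence length, the random input and the quadratic loss are arbitrary choices made only to keep the sketch self-contained, so only the qualitative trend across layers should be compared with Fig. 3a.

```python
import torch
import torch.nn as nn

def layerwise_grad_norms(cell_cls, L=12, T=32, d=64, seed=0):
    torch.manual_seed(seed)
    cells = nn.ModuleList([cell_cls(d, d) for _ in range(L)])
    for c in cells:                                   # orthogonal initialisation, as in the experiments
        nn.init.orthogonal_(c.weight_hh)
        nn.init.orthogonal_(c.weight_ih)
    x = torch.randn(T, 1, d)
    h0 = [torch.zeros(1, d, requires_grad=True) for _ in range(L)]
    hidden = list(h0)
    for t in range(T):
        inp = x[t]
        for l in range(L):                            # lower layer's output feeds the next layer
            hidden[l] = cells[l](inp, hidden[l])
            inp = hidden[l]
    hidden[-1].pow(2).mean().backward()               # loss on the deepest layer's final state
    return [float(h.grad.norm()) for h in h0]         # gradient magnitude reaching each layer

# layerwise_grad_norms(nn.RNNCell) vs. layerwise_grad_norms(nn.GRUCell): compare how the
# per-layer magnitudes trend from the deepest to the shallowest layer for the two cells.
```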
As the optimisation proceeds, we find that STAR remains stable, whereas all other units see a rapid decline of the gradients already within the first epoch, except for RHN, where the gradients explode, see Fig. 3b. Consequently, STAR is the only unit for which a 12-layer model can be trained, as also confirmed by the evolution of the training loss, Fig. 3c. Fig. 4 confirms that stacking into deeper architectures does benefit RNNs (except for vRNN); but it increases the risk of a catastrophic training failure. STAR is significantly more robust in that respect and can be trained up to a depth of 20 layers. On the comparatively easy and saturated MNIST data, the performance is comparable that of a successfully trained LSTM (at depth 2-8 layers, LSTM training already often fails; the displayed accuracies are averaged only over successful training runs). In this experiment, the models are evaluated on a more realistic sequence modelling problem. The task is to classify agricultural crop types using sequences of satellite images, exploiting the fact that different crops have different growing patterns over the season. The input is a time series of 26 multi-spectral Sentinel-2A satellite images with a ground resolution of 10 m, collected over a 102 km x 42 km area north of Munich, Germany between December 2015 and August 2016 (Rußwurm & Körner, 2017). The input data points for the classifier are patches of 3×3 pixels recorded in 6 spectral channels, flattened into 54×1 vectors. In the first task these vectors are sequentially presented to the RNN model, which outputs a prediction at every time step (note that for this task the correct answer can sometimes be "cloud", "snow", "cloud shadow" or "water", which are easier to recognise than many crops). In the second task, the model makes only one crop prediction for the complete sequence, via an additional layer that averages across time. From Fig. 5 we see that STAR outperforms all baselines and its again more robust to stacking. For the single-output task also the STAR network fails at 14 layers. We have not yet been able to identify the reason for this, possibly it is due to cloud cover that completely blanks out the signal over extended time windows and degrades the propagation. This experiment serves to evaluate the performance of different recurrent cells, and in particular the proposed STAR cell, in a convolutional RNN (see appendix for details about convolutional STAR). To that end, we use the 20BN-Jester dataset V1 (jes). Jester is a large collection of densely-labeled short video clips, where each clip contains a predefined hand gesture performed by a worker in front of a laptop camera or webcam. In total, the dataset includes 148'094 RGB video files of 27 types of gestures. The task is to classify which gesture is seen in a video. 32 consecutive frames of size 112×112 pixels are sequentially presented to the convolutional RNN. At the end, the model again predicts a gesture class via an averaging layer over all time steps. The outcome for convolutional RNNs is coherent with the previous , see Fig. 5c. Going deeper improves the performance of all three tested convRNNs. The improvement is strongest for convolutional STAR, and the best performance is reached at high depth (12 layers), where training the baselines mostly fails. 
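The convolutional STAR variant used here is detailed in the paper's appendix, which is not reproduced in this extract; the sketch below is therefore only an assumption about its most natural form, namely the STAR cell with its linear maps replaced by 3x3 convolutions, in the same way convLSTMs are obtained from LSTMs. The 64-filter width and kernel size follow the experiment description, everything else is assumed.

```python
import torch
import torch.nn as nn

class ConvSTARCell(nn.Module):
    def __init__(self, in_channels, hidden_channels=64, kernel_size=3):
        super().__init__()
        p = kernel_size // 2
        self.conv_z = nn.Conv2d(in_channels, hidden_channels, kernel_size, padding=p)
        self.conv_kx = nn.Conv2d(in_channels, hidden_channels, kernel_size, padding=p)
        self.conv_kh = nn.Conv2d(hidden_channels, hidden_channels, kernel_size,
                                 padding=p, bias=False)

    def forward(self, x, h_prev):
        z = torch.tanh(self.conv_z(x))                              # projected input feature map
        k = torch.sigmoid(self.conv_kx(x) + self.conv_kh(h_prev))   # per-pixel gate
        return torch.tanh((1.0 - k) * h_prev + k * z)
```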
In summary, the confirm both our intuition that depth is particularly useful for convolutional RNNs; and that STAR is more suitable for deeper architectures, where it achieves higher performance with better memory efficiency. We note that the in the shallow 1-2 layer setting the conventional LSTM performs a bit better than the two others, likely due to its larger capacity. We have investigated the problem of vanishing/exploding gradient in deep RNNs. In a first step, we analyse how the derivatives of the non-linear activation functions rescale the gradients as they propagate through the temporally unrolled network. From both, the theoretical analysis, and associated numerical simulations, we find that standard RNN cells do not preserve the gradient magnitudes during backpropagation, and therefore, as the depth of the network grows, the risk that the gradients vanish or explode increases. In a second step, we have proposed a new RNN cell, termed the STAckable Recurrent unit, which better preserves gradients through deep architectures and facilitates their training. An extensive evaluation on three popular datasets confirms that STAR units can be stacked into deeper architectures than other RNN cells. We see two main directions for future work. On the one hand, it would be worthwhile to develop a more formal and thorough mathematical analysis of the gradient flow, and perhaps even derive rigorous bounds for specific cell types, that could, in turn, inform the network design. On the other hand, it appears promising to investigate whether the analysis of the gradient flows could serve as a basis for better initialisation schemes to compensate the systematic influences of the cells structure, e.g., gating functions, in the training of deep RNNs. C TRAINING DETAILS C.1 PIXEL-BY-PIXEL MNIST Following Tallec & Ollivier, chrono initialisation is applied for the bias term of k, b k. The basic idea is that k should not be too large; such that the memory h can be retained over longer time intervals. The same initialisation is used for the input and forget bias of the LSTM and the RHN and for the forget bias of LSTMw/f. For the final prediction, a feedforward layer with softmax activation converts the hidden state to a class label. The numbers of hidden units in the RNN layers are set to 128. All networks are trained for 100 epochs with batch size 100, using the Adam optimizer with learning rate 0.001, β 1 = 0.9 and β 2 = 0.999. For both tasks we use the same training schedule. Again a feedforward layer is appended to the RNN output to obtain a prediction. The numbers of hidden units in the RNN layers is set to 128. All networks are trained for 30 epochs with batch size 500, using Adam with learning rate 0.001 and β 1 = 0.9 and β 2 = 0.999. Throughout, convolution kernels are of size 3×3. Each convolutional RNN layer has 64 filters. A shallow CNN is used to convert the hidden state to a label, with 4 layers that have filter depths 128, 128, 256 and 256, respectively. All models are fitted with stochastic gradient descent (SGD) with momentum (β = 0.9). The batch size is set to 8, the learning rate starts at 0.001 and decays polynomially to 0.000001 over a total of 30 epochs. L2-regularisation with weight 0.00005 is applied to all parameters. | We analyze the gradient propagation in deep RNNs and from our analysis, we propose a new multi-layer deep RNN. | 1,270 | scitldr |
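For completeness, a hedged sketch of the chrono initialisation of b_k mentioned at the start of the training details. Following Tallec & Ollivier, the bias is drawn from a log-uniform law over time spans up to T_max; the negative sign, which keeps k small so that the hidden state is retained (the stated intent above), and the choice T_max = sequence length are assumptions on our part.

```python
import numpy as np
import torch

def chrono_bias_k(hidden_size, t_max, rng=np.random.default_rng(0)):
    # initialise b_k so that k = sigmoid(... + b_k) starts close to 0 and the cell
    # can retain information over spans of up to t_max steps
    b = -np.log(rng.uniform(1.0, t_max - 1.0, size=hidden_size))
    return torch.tensor(b, dtype=torch.float32)

# e.g. for sequential MNIST (T = 784), using the STARCell sketch from above:
# cell.W_k.bias.data.copy_(chrono_bias_k(128, 784))
```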
Despite its empirical success, the theoretical underpinnings of the stability, convergence and acceleration properties of batch normalization (BN) remain elusive. In this paper, we attack this problem from a modelling approach, where we perform thorough theoretical analysis on BN applied to simplified model: ordinary least squares (OLS). We discover that gradient descent on OLS with BN has interesting properties, including a scaling law, convergence for arbitrary learning rates for the weights, asymptotic acceleration effects, as well as insensitivity to choice of learning rates. We then demonstrate numerically that these findings are not specific to the OLS problem and hold qualitatively for more complex supervised learning problems. This points to a new direction towards uncovering the mathematical principles that underlies batch normalization. Batch normalization BID7 (BN) is one of the most important techniques for training deep neural networks and has proven extremely effective in avoiding gradient blowups during back-propagation and speeding up convergence. In its original introduction BID7, the desirable effects of BN are attributed to the so-called "reduction of covariate shift". However, it is unclear what this statement means in precise mathematical terms. To date, there lacks a comprehensive theoretical analysis of the effect of batch normalization. In this paper, we study the convergence and stability of gradient descent with batch normalization (BNGD) via a modeling approach. More concretely, we consider a simplified supervised learning problem: ordinary least squares regression, and analyze precisely the effect of BNGD when applied to this problem. Much akin to the mathematical modeling of physical processes, the least-squares problem serves as an idealized "model" of the effect of BN for general supervised learning tasks. A key reason for this choice is that the dynamics of GD without BN (hereafter called GD for simplicity) in least-squares regression is completely understood, thus allowing us to isolate and contrast the additional effects of batch normalization. The modeling approach proceeds in the following steps. First, we derive precise mathematical on the convergence and stability of BNGD applied to the least-squares problem. In particular, we show that BNGD converges for any constant learning rate ε ∈, regardless of the conditioning of the regression problem. This is in stark contrast with GD, where the condition number of the problem adversely affect stability and convergence. Many insights can be distilled from the analysis of the OLS model. For instance, we may attribute the stability of BNGD to an interesting scaling law governing ε and the initial condition; This scaling law is not present in GD. The preceding analysis also implies that if we are allowed to use different learning rates for the BN rescaling variables (ε a) and the remaining trainable variables (ε), we may conclude that BNGD on our model converges for any ε > 0 as long as ε a ∈. Furthermore, we discover an acceleration effect of BNGD and moreover, there exist regions of ε such that the performance of BNGD is insensitive to changes in ε, which help to explain the robustness of BNGD to the choice of learning rates. We reiterate that contrary to many previous works, all the preceding statements are precise mathematical that we derive for our simplified model. 
The last step in our modeling approach is also the most important: we need to demonstrate that these insights are not specific features of our idealized model. Indeed, they should be true characteristics, at least in an approximate sense, of BNGD for general supervised learning problems. We do this by numerically investigating the convergence, stability and scaling behaviors of BNGD on various datasets and model architectures. We find that the key insights derived from our idealized analysis indeed correspond to practical scenarios. Batch normalization was originally introduced in BID7 and subsequently studied in further detail in BID6. Since its introduction, it has become an important practical tool to improve stability and efficiency of training deep neural networks BID5 BID2. Initial heuristic arguments attribute the desirable features of BN to concepts such as "covariate shift", which lacks a concrete mathematical interpretation and alternative explanations have been given BID17. Recent theoretical studies of BN includes BID13, where the authors proposed a variant of BN, the diminishing batch normalization (DBN) algorithm and analyzed the convergence of the DBN algorithm, showing that it converges to a stationary point of the loss function. More recently, BID1 demonstrated that the higher learning rates of batch normalization induce a regularizing effect. Another related work is BID8, where the authors also considered the convergence properties of BNGD on linear networks (similar to the least-squares problem), as well as other special problems, such as learning halfspaces and extensions. In the OLS case, the authors showed that for a particularly adaptive choice of dynamic learning rate schedule, which can be seen as a fixed effective step size in our terminology (see equation and the discussion that immediately follows), BNGD converges linearly if λ max is known. Moreover, the analysis also requires setting the rescaling parameter a every step to satisfy a stationarity condition, instead of simply performing gradient descent on a, as is done in the original BNGD.The present research differs from these previous analysis in an important way -we study the BNGD algorithm itself, and not a special variant. More specifically, we consider constant learning rates (without knowledge of properties of the OLS loss function) and we perform gradient descent on rescaling parameters. We prove that the convergence occurs for even in this case (and in fact, for arbitrarily large learning rates for ε, as long as 0 < ε a ≤ 1). This poses more challenges in the analysis and contrasts our work with previous analysis on modified versions of BNGD. This is an important distinction; While a decaying or dynamic learning rate is sometimes used in practice, in the case of BN it is critical to analyze the non-asymptotic, constant learning rate case, precisely because one of the key practical advantages of BN is that a bigger learning rate can be used than that in GD. Hence, it is desirable, as in the presented in this work, to perform our analysis in this regime. Finally, through the lens of the least-squares example, BN can be viewed as a type of overparameterization, where additional parameters, which do not increase model expressivity, are introduced to improve algorithm convergence and stability. In this sense, this is related in effect to the recent analysis of the implicit acceleration effects of over-parameterization on gradient descent BID0 ). Our paper is organized as follows. 
In Section 2, we outline the ordinary least squares (OLS) problem and present GD and BNGD as alternative means to solve this problem. In Section 3, we demonstrate and analyze the convergence of the BNGD for the OLS model, and in particular contrast the with the behavior of GD, which is completely known for this model. We also discuss the important insights to BNGD that these provide us with. We then validate these findings on more general supervised learning problems in Section 4. Finally, we conclude in Section 5. Consider the simple linear regression model where x ∈ R d is a random input column vector and y is the corresponding output variable. Since batch normalization is applied for each feature separately, in order to gain key insights it is sufficient to the case of y ∈ R. A noisy linear relationship is assumed between the dependent variable y and the independent variables x, i.e. y = x T w + noise where w ∈ R d is the parameters. Denote the following moments: DISPLAYFORM0 To simplify the analysis, we assume the covariance matrix H of x is positive definite and the mean E[x] of x is zero. The eigenvalues of H are denoted as λ i (H), i = 1, 2,...d,. Particularly, the maximum and minimum eigenvalue of H is denoted by λ max and λ min respectively. The condition number of H is defined as κ:= λmax λmin. Note that the positive definiteness of H allows us to define the vector norms. H and. DISPLAYFORM1 The ordinary least squares (OLS) method for estimating the unknown parameters w leads to the following optimization problem min DISPLAYFORM0 The gradient of J 0 with respect to w is ∇ w J 0 (w) = Hw − g, and the unique minimizer is w = u:= H −1 g. The gradient descent (GD) method (with step size or learning rate ε) for solving the optimization problem is given by the iterating sequence, DISPLAYFORM1 which converges if 0 < ε < 2 λmax =: ε max, and the convergence rate is determined by the spectral radius DISPLAYFORM2 It is well known (for example see Chapter 4 of BID16) that the optimal learning rate is ε opt = 2 λmax+λmin, where the convergence estimate is related to the condition number κ(H): DISPLAYFORM3 Batch normalization is a feature-wise normalization procedure typically applied to the output, which in this case is simply z = x T w. The normalization transform is defined as follows: DISPLAYFORM0 where σ:= √ w T Hw. After this rescaling, N BN (z) will be order 1, and hence in order to reintroduce the scale BID7, we multiply N BN (z) with a rescaling parameter a (Note that the shift parameter can be set zero since E[w T x|w] = 0). Hence, we get the BN version of the OLS problem: DISPLAYFORM1 The objective function J(a, w) is no longer convex. In fact, it has trivial critical points, {(a *, w *)|a * = 0, w * T g = 0}, which are saddle points of J(a, w).We are interested in the nontrivial critical points which satisfy the relations, DISPLAYFORM2 It is easy to check that the nontrivial critical points are global minimizers, and the Hessian matrix at each critical point is degenerate. Nevertheless, the saddle points are strict (Details can be found in Appendix), which typically simplifies the analysis of gradient descent on non-convex objectives BID12 BID14.Consider the gradient descent method to solve the problem, which we hereafter call batch normalization gradient descent (BNGD). We set the learning rates for a and w to be ε a and ε respectively. These may be different, for reasons which will become clear in the subsequent analysis. 
We thus have the following discrete-time dynamical system DISPLAYFORM3 DISPLAYFORM4 We now begin a concrete mathematical analysis of the above iteration sequence. In this section, we discuss several mathematical one can derive concretely for BNGD on the OLS problem. First, we establish a simple but useful scaling property, which an important ingredient in allowing us to prove a linear convergence for arbitrary constant learning rates. We also derive the asymptotic properties of the "effective" learning rate of BNGD (to be precisely defined subsequently), which shows some interesting sensitivity behavior of BNGD on the chosen learning rates. Detailed proofs of all presented here can be found in the Appendix. In this section, we discuss a straightforward, but useful scaling property that the BNGD iterations possess. Note that the dynamical properties of the BNGD iteration are governed by a set of numbers, or a configuration {H, u, a 0, w 0, ε a, ε}.Definition 3.1 (Equivalent configuration). Two configurations, {H, u, a 0, w 0, ε a, ε} and {H, u, a 0, w 0, ε a, ε}, are said to be equivalent if for iterates {w k}, {w k} following these configurations respectively, there is an invertible linear transformation T and a nonzero constant t such that DISPLAYFORM0 The scaling property ensures that equivalent configurations must converge or diverge together, with the same rate up to a constant multiple. Now, it is easy to check the system has the following scaling law. Proposition 3.2 (Scaling property). DISPLAYFORM1 The configurations {µQ T HQ, γ √ µ Qu, γa 0, γQw 0, ε a, ε} and {H, u, a 0, w 0, ε a, ε} are equivalent. The configurations {H, u, a 0, w 0, ε a, ε} and {H, u, a 0, rw 0, ε a, r 2 ε} are equivalent. It is worth noting that the scaling property in Proposition 3.2 originates from the batchnormalization procedure and is independent of the specific structure of the loss function. Hence, it is valid for general problems where BN is used (Lemma A.9). Despite being a simple , the scaling property is important in determining the dynamics of BNGD, and is useful in our subsequent analysis of its convergence and stability properties (see the sketch of the proof of Theorem 3.3). We have the following convergence . Theorem 3.3 (Convergence for BNGD). The iteration sequence (a k, w k) in equation FORMULA9 - converges to a stationary point for any initial value (a 0, w 0) and any ε > 0, as long as ε a ∈. Particularly, we have the following sufficient conditions of converging to global minimizers.(DISPLAYFORM0, ε a ∈ and ε is sufficiently small (the smallness is quantified by Lemma A.13), then (a k, w k) converges to a global minimizer. If ε a = 1 and ε > 0, then (a k, w k) converges to global minimizers for almost all initial values (a 0, w 0). We first prove that the algorithm converges for any ε a ∈ and small enough ε, with any initial value (a 0, w 0) such that w 0 ≥ 1 (Lemma A.13). Next, we observe that the sequence {w k} is monotone increasing, and thus either converges to a finite limit or diverges. The scaling property is then used to exclude the divergent case if {w k} diverges, then at some k the norm w k should be large enough, and by the scaling property, it is equivalent to a case where w k =1 and ε is small, which we have proved converges. This shows that w k converges to a finite limit, from which the convergence of w k and the loss function value can be established, after some work. The proof is fully presented in Theorem A.17 and preceding Lemmas. 
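A minimal numerical sketch of this contrast between GD and BNGD follows. Since the displayed update equations are not reproduced above, the BNGD update is re-derived under the assumption that the objective is J(a, w) = (1/2) E (y - a w^T x / sigma)^2 with sigma = sqrt(w^T H w); the gradient expressions in the comments follow from that assumption, and H, u and the step sizes are arbitrary illustrative choices.

import numpy as np

def gd(H, g, w, eps, n_steps):
    # Plain gradient descent on J_0(w); gradient is H w - g, stable only for eps < 2 / lambda_max.
    for _ in range(n_steps):
        w = w - eps * (H @ w - g)
    return w

def bngd(H, g, a, w, eps_a, eps, n_steps):
    # Gradient descent on the batch-normalised objective (assumed form, see above):
    #   dJ/da = a - w^T g / sigma
    #   dJ/dw = -(a / sigma) * (g - (w^T g / sigma^2) * H w),  sigma = sqrt(w^T H w).
    for _ in range(n_steps):
        sigma = np.sqrt(w @ H @ w)
        wg = w @ g
        grad_a = a - wg / sigma
        grad_w = -(a / sigma) * (g - (wg / sigma ** 2) * (H @ w))
        a, w = a - eps_a * grad_a, w - eps * grad_w
    return a, w

rng = np.random.default_rng(0)
d = 20
H = np.diag(np.logspace(0, 2, d))                 # condition number kappa = 100
u = rng.standard_normal(d); u /= np.linalg.norm(u)
g = H @ u                                          # the OLS minimiser is u

eps = 1.0                                          # 50x above the GD stability threshold 2 / lambda_max = 0.02
w_gd = gd(H, g, np.ones(d), eps, n_steps=100)
a_bn, w_bn = bngd(H, g, 0.1, np.ones(d), eps_a=1.0, eps=eps, n_steps=2000)

e_bn = u - (w_bn @ g) / (w_bn @ H @ w_bn) * w_bn   # u minus its H-projection onto w_bn
print(np.linalg.norm(w_gd))                        # astronomically large: GD diverges at this eps
print(np.linalg.norm(e_bn))                        # small: BNGD remains stable and converges at the same eps

The same update with much larger eps (for example 100 or 10000) also remains bounded and converges, in line with Theorem 3.3, although the number of iterations needed to reach a given accuracy changes considerably.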
In addition, using the'strict saddle point' arguments in BID12 BID14, we can prove the set of initial value for which (a k, w k) converges to saddle points has Lebesgue measure 0, provided some conditions, such as when ε a = 1, ε > 0 (Lemma A.20). It is important to note that BNGD converges for all step size ε > 0 of w k, independent of the spectral properties of H. This is a significant advantage and is in stark contrast with GD, where the step size is limited by λ max (H), and the condition number of H intimately controls the stability and convergence rate. Although we only prove the almost sure convergence to global minimizer for the case of ε a = 1, we have not encountered convergence to saddles in the OLS experiments even for ε a ∈ with initial values (a 0, w 0) drawn from typical distributions. Now, let us consider the convergence rate of BNGD when it converges to a minimizer. Compared with GD, the update coefficient before Hw k in equation FORMULA10 changed from ε to a complicated term which we named as the effective step size or learning rateε k DISPLAYFORM0 and the recurrence relation in place of u − w k is DISPLAYFORM1 Consider the dynamics of the residual DISPLAYFORM2 k )w k, which equals 0 if and only if w k is a global minimizer. Using the property of H-norm (see section A.1), we observe that the effective learning rateε k determines the convergence rate of e k via DISPLAYFORM3 where ρ(I −ε k H) is spectral radius of the matrix I −ε k H. The inequality FORMULA3 shows that the convergence of e k (and hence the loss function, see Lemma A.23) is linear providedε k ∈ (δ, 2/λ max − δ) for some positive number δ. In fact, if we enforceε k = 1/λ max for each k, which is done in the analysis in BID8, then one immediately obtains the same linear convergence rate. But this requires knowledge of λ max (problem-dependent) and a modification the BNGD algorithm. We instead focus our analysis on the original BNGD algorithm. Next, let us discuss below an acceleration effect of BNGD over GD. When (a k, w k) is close to a minimizer, we can approximate the iteration- by a linearized system. The Hessian matrix for BNGD at a minimizer (a *, w *) is diag(1, H * / w * 2), where the matrix H * is DISPLAYFORM4 The matrix H * is positive semi-definite (H * u = 0) and has better spectral properties than H, such as a lower pseudo-condition number κ * = λ * max λ * min ≤ κ, where λ * max and λ * min are the maximal and minimal nonzero eigenvalues of H * respectively. Particularly, κ * < κ for almost all u (see section A.1). This property leads to acceleration effects of BNGD: When e k H is small, the contraction coefficient ρ in can be improved to a lower coefficient. More precisely, we have the following : Proposition 3.4. For any positive number δ ∈, if (a k, w k) is close to a minimizer, such that DISPLAYFORM5 where ρ DISPLAYFORM6 Generally, we have ρ DISPLAYFORM7, and the optimal rate is ρ * opt:= DISPLAYFORM8 where the inequality is strict for almost all u. Hence, the estimate indicates that the optimal BNGD could have a faster convergence rate than the optimal GD, especially when κ * is much smaller than κ. Finally, we discuss the dependence of the effective learning rateε k (and by extension, the effective convergence rate FORMULA3 or FORMULA5) on ε. This is in essence a sensitivity analysis on the performance of BNGD with respect to the choice of learning rate. The explicit dependence ofε k on ε is quite complex, but we can nevertheless give the following asymptotic estimates. 
DISPLAYFORM9 When ε is small enough, ε 1, the effective step size has a same order with ε, i.e. there are two positive constants, C 1, C 2, independent on ε and k, such that C 1 ≤ε k /ε ≤ C 2. When ε is large enough, ε 1, the effective step size has order O(ε −1), i.e. there are two positive constants, C 1, C 2, independent on ε and k, such that C 1 ≤ε k ε ≤ C 2.Observe that for finite k,ε k is a differentiable function of ε. Therefore, the above implies, via the mean value theorem, the existence of some ε 0 > 0 such that dε k /dε| ε=ε0 = 0. Consequently, there is at least some small interval of the choice of learning rates ε where the performance of BNGD is insensitive to this choice. In fact, empirically this is one commonly observed advantage of BNGD over GD, where the former typically allows for a variety of (large) learning rates to be used without adversely affecting performance. The same is not true for GD, where the convergence rate depends sensitively on the choice of learning rate. We will see later in Section 4 that although we only have a local insensitivity above, the interval of this insensitivity is actually quite large in practice. Let us first summarize our key findings and insights from the analysis of BNGD on the OLS problem.1. A scaling law governs BNGD, where certain configurations can be deemed equivalent 2. BNGD converges for any learning rate ε > 0, provided that ε a ∈. In particular, different learning rates can be used for the BN variables (a) compared with the remaining trainable variables (w) 3. There exists intervals of ε for which the performance of BNGD is not sensitive to the choice of εIn the subsequent sections, we first validate numerically these claims on the OLS model, and then show that these insights go beyond the simple OLS model we considered in the theoretical framework. In fact, much of the uncovered properties are observed in general applications of BNGD in deep learning. Here we test the convergence and stability of BNGD for the OLS model. Consider a diagonal matrix H = diag(h) where h = (1, ..., κ) is a increasing sequence. The scaling property (Proposition 3.2) allows us to set the initial value w 0 having same 2-norm with u, w 0 = u = 1. Of course, one can verify that the scaling property holds strictly in this case. FIG0 gives examples of H with different condition numbers κ. We tested the loss function of BNGD, compared with the optimal GD (i.e. GD with the optimal step size ε opt), in a large range of step sizes ε a and ε, and with different initial values of a 0. Another quantity we observe is the effective step sizeε k of BN. The are encoded by four different colors: whetherε k is close to the optimal step size ε opt, and whether loss of BNGD is less than the optimal GD. The indicate that the optimal convergence rate of BNGD can be better than GD in some configurations. This acceleration phenomenon is ascribed to the pseudo-condition number of H * (discard the only zero eigenvalue) being less than κ(H). This advantage of BNGD is significant when the (pseudo)-condition number discrepancy between H and H * is large. However, if this difference is small, the acceleration is imperceptible. This is consistent with our analysis in section 3.3.Another important observation is a region such thatε is close to ε opt, in other words, BNGD significantly extends the range of'optimal' step sizes. Consequently, we can choose step sizes in BNGD at greater liberty to obtain almost the same or better convergence rate than the optimal GD. 
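This wide region of workable step sizes is easy to reproduce with a small sweep. The sketch below reuses the same assumed BNGD update as in the earlier sketch; the "effective step size" printed here is the coefficient in front of H w in that assumed update, and H, u and w_0 only loosely follow the figure parameters listed above.

import numpy as np

def bngd_run(H, g, a, w, eps_a, eps, n_steps=2000):
    # Same assumed BNGD update as in the earlier sketch. Returns the final
    # effective step size (coefficient in front of H w in the w-update) and
    # the final excess loss 0.5 * || u - (a / sigma) w ||_H^2.
    for _ in range(n_steps):
        sigma = np.sqrt(w @ H @ w)
        wg = w @ g
        a, w = a - eps_a * (a - wg / sigma), w + eps * (a / sigma) * (g - (wg / sigma ** 2) * (H @ w))
    sigma = np.sqrt(w @ H @ w)
    eps_hat = eps * a * (w @ g) / sigma ** 3
    err = np.linalg.solve(H, g) - (a / sigma) * w   # u minus the effective linear predictor
    return eps_hat, 0.5 * err @ H @ err

rng = np.random.default_rng(2)
d = 100
H = np.diag(np.logspace(0, np.log10(2000), d))      # kappa = 2000, as in the figure above
u = rng.standard_normal(d); u /= np.linalg.norm(u)
g = H @ u
w0 = H @ u / np.linalg.norm(H @ u)

for eps in np.logspace(-3, 6, 10):
    eps_hat, loss = bngd_run(H, g, 0.1, w0.copy(), eps_a=1.0, eps=eps)
    print(f"eps={eps:9.3g}  eps_hat={eps_hat:9.3g}  loss={loss:9.3g}")
# eps_hat grows roughly linearly with eps when eps is small and shrinks again when
# eps is very large (Proposition 3.5), so it stays in a much narrower range than eps
# itself; plain GD, in contrast, simply diverges for eps > 2 / lambda_max = 1e-3.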
However, the size of this region is inversely dependent on the initial condition a 0. Hence, this suggests that small a 0 at first steps may improve robustness. On the other hand, small ε a will weaken the performance of BN. The phenomenon suggests that improper initialization of the BN parameters weakens the power of BN. This experience is encountered in practice, such as BID3, where higher initial values of BN parameter are detrimental to the optimization of RNN models. whetherε k is close to the optimal step size ε opt of GD, characterized by the inequality 0.8ε opt < ε k < ε opt /0.8, and whether loss of BNGD is less than the optimal GD. Parameters: H = diag(logspace(0,log10(κ),100)), u is randomly chosen uniformly from the unit sphere in R 100, w 0 is set to Hu/ Hu. The GD and BNGD iterations are executed for k = 2000 steps with the same w 0. In each image, the range of ε a (x-axis) is 1.99 * logspace (-10,0,41), and the range of ε (y-axis) is logspace (-5,16,43). We conduct experiments on deep learning applied to standard classification datasets: MNIST , Fashion MNIST BID19 and CIFAR-10 . The goal is to explore if the key findings outlined at the beginning of this section continue to hold for more general settings. For the MNIST and Fashion MNIST dataset, we use two different networks: a one-layer fully connected network (784 × 10) with softmax mean-square loss; a fourlayer convolution network (Conv-MaxPool-Conv-MaxPool-FC-FC) with ReLU activation function and cross-entropy loss. For the CIFAR-10 dataset, we use a five-layer convolution network (ConvMaxPool-Conv-MaxPool-FC-FC-FC). All the trainable parameters are randomly initialized by the Glorot scheme BID4 before training. For all three datasets, we use a minibatch size of 100 for computing stochastic gradients. In the BNGD experiments, batch normalization is performed on all layers, the BN parameters are initialized to transform the input to zero mean/unit variance distributions, and a small regularization parameter =1e-3 is added to variance √ σ 2 + to avoid division by zero. Scaling property Theoretically, the scaling property 3.2 holds for any layer using BN. However, it may be slightly biased by the regularization parameter. Here, we test the scaling property in practical settings. Figure 2 gives the loss of network- (2CNN+2FC) at epoch=1 with different learning rate. The norm of all weights and biases are rescaled by a common factor η. We observe that the scaling property remains true for relatively large η. However, when η is small, the norm of weights are small. Therefore, the effect of the -regularization in √ σ 2 + becomes significant, causing the curves to be shifted. Stability for large learning rates We use the loss value at the end of the first epoch to characterize the performance of BNGD and GD methods. Although the training of models have generally not converged at this point, it is enough to extract some relative rate information. FIG5 shows the loss value of the networks on the three datasets. It is observed that GD and BNGD with identical learning rates for weights and BN parameters exhibit a maximum allowed learning rate, beyond which the iterations becomes unstable. 
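For reference, the 2CNN+2FC network used in these experiments can be sketched as follows; the kernel sizes and channel widths are illustrative assumptions, since the text only fixes the Conv-MaxPool-Conv-MaxPool-FC-FC structure, the ReLU activations, batch normalization on all layers and the variance regulariser of 1e-3.

import torch
import torch.nn as nn

class ConvNetBN(nn.Module):
    # 2CNN + 2FC with BN on every layer (a sketch; widths are assumptions).
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32, eps=1e-3),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm2d(64, eps=1e-3),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256),
            nn.BatchNorm1d(256, eps=1e-3),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# MNIST-sized input: a minibatch of 100 greyscale 28x28 images.
logits = ConvNetBN()(torch.randn(100, 1, 28, 28))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (100,)))

PyTorch's default affine initialisation of BatchNorm (unit weight, zero bias) corresponds to the zero-mean/unit-variance initialisation of the BN parameters described above.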
On the other hand, BNGD with separate learning rates exhibits a much larger range of stability over learning rate for non-BN parameters, consistent with our theoretical in Theorem 3.3.Insensitivity of performance to learning rates Observe that BN accelerates convergence more significantly for deep networks, whereas for one-layer networks, the best performance of BNGD and Figure 2: Tests of scaling property of the 2CNN+2FC network on MNIST dataset. BN is performed on all layers, and =1e-3 is added to variance √ σ 2 +. All the trainable parameters (except the BN parameters) are randomly initialized by the Glorot scheme, and then multiplied by a same parameter η. GD are similar. Furthermore, in most cases, the range of optimal learning rates in BNGD is quite large, which is in agreement with the OLS analysis (Proposition 3.5). This phenomenon is potentially crucial for understanding the acceleration of BNGD in deep neural networks. Heuristically, the "optimal" learning rates of GD in distinct layers (depending on some effective notion of "condition number") may be vastly different. Hence, GD with a shared learning rate across all layers may not achieve the best convergence rates for all layers at the same time. In this case, it is plausible that the acceleration of BNGD is a of the decreased sensitivity of its convergence rate on the learning rate parameter over a large range of its choice. Figure 3: Performance of BNGD and GD method on MNIST (network-, 1FC), Fashion MNIST (network-, 2CNN+2FC) and CIFAR-10 (2CNN+3FC) datasets. The performance is characterized by the loss value at ephoch=1. In the BNGD method, both the shared learning rate schemes and separated learning rate scheme (learning rate lr a for BN parameters) are given. The values are averaged over 5 independent runs. In this paper, we adopted a modeling approach to investigate the dynamical properties of batch normalization. The OLS problem is chosen as a point of reference, because of its simplicity and the availability of convergence for gradient descent. Even in such a simple setting, we saw that BNGD exhibits interesting non-trivial behavior, including scaling laws, robust convergence properties, acceleration, as well as the insensitivity of performance to the choice of learning rates. Although these are derived only for the OLS model, we show via experiments that these are qualitatively valid for general scenarios encountered in deep learning, and points to a concrete way in uncovering the reasons behind the effectiveness of batch normalization. Interesting future directions include the extension of the for the OLS model to more general settings of BNGD, where we believe the scaling law (Proposition 3.2) should play a significant role. In addition, we have not touched upon another empirically observed advantage of batch normalization, which is better generalization errors. It will be interesting to see how far the current approach takes us in investigating such probabilistic aspects of BNGD. The objective function in problem has an equivalent form: DISPLAYFORM0 where u = H −1 g. The gradients are: DISPLAYFORM1 DISPLAYFORM2 The Hessian matrix is DISPLAYFORM3 where DISPLAYFORM4 DISPLAYFORM5 The objective function J(a, w) has trivial critical points, {(a *, w *)|a DISPLAYFORM6 It is obvious that a * is the minimizer of J(a, w *), but (a *, w *) is not a local minimizer of J(a, w) unless g = 0, hence (a *, w *) are saddle points of J(a, w). The Hessian matrix at those saddle points has at least a negative eigenvalue, i.e. 
the saddle points are strict. In fact, the eigenvalues at the saddle point (a *, w *) are On the other hand, the nontrivial critical points satisfies the relations, DISPLAYFORM7 where the sign of a * depends on the direction of u, w *, i.e. sign(a *) = sign(u T w *). It is easy to check that the nontrivial critical points are global minimizers. The Hessian matrix at those minimizers is diag(1, H * / w * 2) where the matrix H * is DISPLAYFORM8 which is positive semi-definite and has a zero eigenvalue corresponding to the eigenvector u, i.e. H * u = 0. The following lemma, similar to the well known Cauchy interlacing theorem, gives an estimate of eigenvalues of H *.Lemma A.1. If H is positive definite and H * is defined as DISPLAYFORM9, then the eigenvalues of H and H * satisfy the following inequalities: DISPLAYFORM10 Here λ i (H) means the i-th smallest eigenvalue of H.Proof. According to the definition, we have H * u = 0, and for any x ∈ R d, DISPLAYFORM11 which implies H * is semi-positive definite, and λ i (H *) ≥ λ 1 (H *) = 0. Furthermore, we have the following equality: DISPLAYFORM12 We will prove DISPLAYFORM13 In fact, using the Min-Max Theorem, we have DISPLAYFORM14 We will prove λ i (H *) ≥ λ i−1 (H) for all i, 2 ≤ i ≤ d. In fact, using the Max-Min Theorem, we have DISPLAYFORM15 where we have used the fact that DISPLAYFORM16 There are several corollaries related to the spectral property of H *. We first give some definitions. Since H * is positive semi-definite, we can define the H * -seminorm. Definition A.2. The H * -seminorm of a vector x is defined as x H *:= x T H * x. x H * = 0 if and only if x is parallel to u. DISPLAYFORM17 λ2(H *). Definition A.4. For any real number ε, the pseudo-spectral radius of the matrix I − εH * is defined as ρ DISPLAYFORM18 The following corollaries are direct consequences of Lemma A.1, hence we omit the proofs. Corollary A.5. The pseudo-condition number of H * is less than or equal to the condition number of H: DISPLAYFORM19 where the equality holds up if and only if u ⊥ span{v 1, v d}, v i is the eigenvector of H corresponding to eigenvalue λ i (H).Corollary A.6. For any vector x ∈ R d and any real number ε, we have (I − εH DISPLAYFORM20 Corollary A.7. For any positive number ε > 0, we have DISPLAYFORM21 where the inequality is strict if u DISPLAYFORM22 It is obvious that the inequality in FORMULA2 and FORMULA2 is strict for almost all u. The dynamical system defined in equation FORMULA9 - FORMULA10 is completely determined by a set of configurations {H, u, a 0, w 0, ε a, ε}. It is easy to check the system has the following scaling property: Lemma A.8 (Scaling property). Suppose µ = 0, γ = 0, r = 0, Q T Q = I, then The configurations {µQ T HQ, γ √ µ Qu, γa 0, γQw 0, ε a, ε} and {H, u, a 0, w 0, ε a, ε} are equivalent. The configurations {H, u, a 0, w 0, ε a, ε} and {H, u, a 0, rw 0, ε a, r 2 ε} are equivalent. The scaling property is valid for general loss functions provided batch normalization is used. Consider a general problem DISPLAYFORM0 and its BN version DISPLAYFORM1 Then the gradient descent method gives the following iteration, DISPLAYFORM2 DISPLAYFORM3 whereh = h(a k w k /σ k), and h is the gradient of original problem: DISPLAYFORM4 It is easy to check the general BNGD has the following property: Lemma A.9 (General scaling property). Suppose r = 0, then the configurations {w 0, ε, *} and {rw 0, r 2 ε, *} are equivalent. Here the sign * means other parameters. 
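Returning to Lemma A.1 and Corollary A.5, both are easy to check numerically. Because the displayed definition of H* is not reproduced above, the sketch below assumes the rank-one deflation H* = H - H u u^T H / (u^T H u), which satisfies the stated identities H* u = 0 and x^T H* x = x^T H x - (x^T H u)^2 / (u^T H u) >= 0.

import numpy as np

rng = np.random.default_rng(0)
d = 50
lams = np.logspace(0, 4, d)                        # eigenvalues of H, kappa = 1e4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = Q @ np.diag(lams) @ Q.T                        # random positive definite H
u = rng.standard_normal(d)

Hu = H @ u
H_star = H - np.outer(Hu, Hu) / (u @ Hu)           # assumed form of H*, see above
eig_H = np.sort(np.linalg.eigvalsh(H))
eig_S = np.sort(np.linalg.eigvalsh(H_star))

print(np.linalg.norm(H_star @ u))                  # ~0: u is in the kernel of H*
print(np.all(eig_S[1:] <= eig_H[1:] + 1e-6),       # lambda_i(H*) <= lambda_i(H)
      np.all(eig_S[1:] >= eig_H[:-1] - 1e-6))      # lambda_i(H*) >= lambda_{i-1}(H)
kappa, kappa_star = eig_H[-1] / eig_H[0], eig_S[-1] / eig_S[1]
print(kappa, kappa_star)                           # kappa* <= kappa (Corollary A.5)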
Recall the BNGD iterations DISPLAYFORM0 Hw k.The scaling property simplify our analysis by allowing us to set, for example, u = 1 and w 0 = 1. In the rest of this section, we only set u = 1.For the step size of a, it is easy to check that a k tends to infinity with ε a > 2 and initial value a 0 = 1, w 0 = u. Hence we only consider 0 < ε a < 2, which make the iteration of a k bounded by some constant C a. Lemma A.10 (Boundedness of a k). If the step size 0 < ε a < 2, then the sequence a k is bounded for any ε > 0 and any initial value (a 0, w 0). DISPLAYFORM1 According to the iterations, we have DISPLAYFORM2 Define DISPLAYFORM3 DISPLAYFORM4 DISPLAYFORM5 and using the property DISPLAYFORM6 u − tw H, and the property of H-norm, we have DISPLAYFORM7 Therefore we have the following lemma to make sure the iteration converge: Lemma A.11. Let 0 < ε a < 2. If there are two positive numbers ε − andε +, and the effective step sizeε k satisfies DISPLAYFORM8 for all k large enough, then the iterations converge to a minimizer. Proof. Without loss of generality, we assume FORMULA3 is satisfied for all k ≥ 0. We will prove w k converges and the direction of w k converges to the direction of u. Since w k is always increasing, we only need to prove it is bounded. We have, DISPLAYFORM9 The inequality in last lines are based on the fact that He i 2 ≤ λ max e i 2 H, and |a k | are bounded by a constant C a. Next, we will prove ∞ i=0 qi wi 2 < ∞, which implies w k are bounded. According to the estimate, we have DISPLAYFORM10 where DISPLAYFORM11. Using the definition of q k, we have DISPLAYFORM12 Since q k is bounded in [0, u T Hu], summing both side of the inequality, we get the bound of the infinite series DISPLAYFORM13 Since w k is bounded, we denoteε −:= ε − w∞ 2, and define ρ:= max DISPLAYFORM14 then the inequality implies q k+1 ≤ ρ 2 q k. As a consequence, q k tends to zero, which implies the direction of w k converges to the direction of u. The convergence of a k is a consequence of w k converging. Since a k is bounded, we assume |a k | <C a √ u T Hu,C a ≥ 1, and define ε 0:= 1 2Caκλmax. The following lemma gives the convergence for small step size. Lemma A.12. If the initial values (a 0, w 0) satisfies a 0 w T 0 g > 0, and step size satisfies ε a ∈, ε/ w 0 2 < ε 0, then the sequence (a k, w k) converges to a global minimizer. Remark 1: If we set a 0 = 0, then we have DISPLAYFORM15 Remark 2: For the case of ε a ∈, if the initial value satisfies an additional condition 0 < |a 0 | ≤ ε a |w T 0 g| σ0, then we have (a k, w k) converges to a global minimizer as well. Proof. Without loss of generality, we only consider the case of a 0 > 0, w T 0 g > 0, w 0 ≥ 1. We will prove a k > 0, w DISPLAYFORM16 On the one hand, if a k > 0, 0 < y k < 2δ, then DISPLAYFORM17 On the other hand, when a k > 0, y k > 0, ε < ε 0, we have DISPLAYFORM18 DISPLAYFORM19 As a consequence, we have a k > 0, y k ≥ δ y:= min{y 0, δ} for all k by induction. We will prove the effective step sizeε k satisfying the condition in Lemma A.11.Since a k is bounded, ε < ε 0, we havê DISPLAYFORM20 and DISPLAYFORM21 which implies DISPLAYFORM22 and there is a positive constant ε − > 0 such that DISPLAYFORM23 Employing the Lemma A.11, we conclude that (a k, w k) converges to a global minimizer. Lemma A.13. If step size satisfies ε a ∈, ε/ w 0 2 < ε 0, then the sequence (a k, w k) converges. Proof. Thanks to Lemma A.12, we only need to consider the case of a k w T k g ≤ 0 for all k, and we will prove the iteration converges to a saddle point in this case. 
Since the case of a k = 0 or w T k g = 0 is trivial, we assume a k w T k g < 0 below. More specifically, we will prove |a k+1 | < r|a k | for some constant r ∈, which implies convergence to a saddle point. If a k and a k+1 have same sign, hence different sign with w T k g, then we have DISPLAYFORM24 If a k and a k+1 have different signs, then we have DISPLAYFORM25 Consequently, we get DISPLAYFORM26 Setting r:= max(|1 − ε a |, 2εε a κλ max − (1 − ε a)), we finish the proof. To simplify our proofs for Theorem 3.3, we give two lemmas which are obvious but useful. Lemma A.14. If positive series f k, h k satisfy f k+1 ≤ rf k + h k, r ∈ and lim DISPLAYFORM27 Proof. It is obvious, because the series b k defined by b k+1 = rb k + h k, b 0 > 0, tends to zeros. Lemma A.15 (Separation property). For δ 0 small enough, the set S:= {w|y 2 q < δ 0, w ≥ 1} is composed by two separated parts: S 1 and S 2, dist(S 1, S 2) > 0, where in the set S 1 one has y 2 < δ 1, q > δ 2, and in S 2 one has q < δ 2, y 2 > δ 1 for some δ 1 > 0, δ 2 > 0. Here y:= w T g, q:= DISPLAYFORM28 Proof. The proof is based on H being positive. The geometric meaning is illustrated in FIG4. Proof. Denote y k:= w T k g. According to the separation property (Lemma A.15), we can chose a δ 0 > 0 small enough such that the separated parts of the set S:= {w|y 2 q < δ 0, w ≥ 1}, S 1 and S 2, have dist(S 1, S 2) > 0.Because y 2 k q k tends to zero, we have w k belongs to S for k large enough, for instance k > k 1. On the other hand, because w k+1 − w k tends to zero, we have w k+1 − w k < dist(S 1, S 2) for k large enough, for instance k > k 2. Then consider k > k 3:= max(k 1, k 2), we have all w k belongs to the same part S 1 or S 2. DISPLAYFORM29 On the other hand, if w k ∈ S 2, (y DISPLAYFORM30 Theorem A.17. Let ε a ∈ and ε > 0. The sequence (a k, w k) converges for any initial value (a 0, w 0).Proof. We will prove w k converges, then prove (a k, w k) converges as well. We will prove that w k is bounded and hence converges. In fact, according to the Lemma A.13, once w k 2 ≥ ε/ε 0 for some k, the rest of the iteration will converge, hence w k is bounded. We will prove lim k→∞ w k+1 − w k = 0, and lim DISPLAYFORM31 The convergence of w k implies k a 2 k q k is summable. As a consequence, lim DISPLAYFORM32 and lim k→∞ w k+1 − w k = 0. In fact, we have DISPLAYFORM33 Consider the iteration of series |a k − w DISPLAYFORM34 The constant C in can be chosen as C = ελmax u H λmin w0 2. Since a k e k H tends to zero, we can use Lemma A.14 to get lim DISPLAYFORM35 Combine the equation FORMULA5, then we have lim case, the iteration of (a k, w k) converges to a saddle point. However, in the latter case, (a k, w k) converges to a global minimizer. In both cases we have (a k, w k) converges. DISPLAYFORM36 To finish the proof of Theorem 3.3, we have to demonstrate the special case of ε a = 1 where the set of initial values such that BN iteration converges to saddle points is Lebeguse measure zero. We leave this demonstration in next section where we consider the case of ε a ≥ 1. In this section, we will prove the set of initial values such that BN iteration converges to saddle points is (Lebeguse) measure zero, as long as ε a ≥ 1. The tools in our proof is similar to the analysis of gradient descent on non-convex objectives BID12 BID14. 
In addition, we used the real analytic property of the BN loss function.For brevity, here we denote x:= (a, w) and let ε a = ε, then the BN iteration can be rewrote as DISPLAYFORM0 ) is a measure zero set, then the preimage T −1 (A) is of measure zero as well. Proof. Since T is smooth enough, according to Theorem 3 of BID15, we only need to prove the Jacobian of T (x) is nonzero for almost all x ∈ R d. In other words, the set {x : det(I − ε∇ 2 J(x)) = 0} is of measure zero. This is true because the function det(I − ε∇ 2 J(x)) is a real analytic function of x ∈ R d /{0}. (Details of properties of real analytic functions can be found in BID9 Lemma A.19. Let f : X → R be twice continuously differentiable in an open set X ⊂ R d and x * ∈ X be a stationary point of f . If ε > 0, det(I − ε∇ 2 f (x *)) = 0 and the matrix ∇ 2 f (x *) has at least a negative eigenvalue, then there exist a neighborhood U of x * such that the following set B has measure zero, DISPLAYFORM0 Proof. The detailed proof is similar to BID12 BID14.Define the transform function as F (x):= x − ε∇f (x). Since det(I − ε∇ 2 f (x *)) = 0, accorded to the inverse function theorem, there exist a neighborhood U of x * such that T has differentiable inverse. Hence T is a local C 1 diffeomorphism, which allow us to use the central-stable manifold theorem BID18. The negative eigenvalues of ∇ 2 f (x *) indicates λ max (I − ε∇ 2 f (x *)) > 1 and the dimension of the unstable manifold is at least one, which implies the set B is on a lower dimension manifold hence B is of measure zero. Lemma A.20. If ε a = ε ≥ 1, then the set of initial values such that BN iteration converges to saddle points is of Lebeguse measure zero. Proof. We will prove this argument using Lemma A.18 and Lemma A.19. Denote the saddle points set as W:= {(a *, w *): a * = 0, w * T g = 0}. The basic point is that the saddle point x *:= (a *, w *)of the BN loss function has eigenvalues For each saddle point x *:= (a *, w *) of BN loss function, ε ≥ 1 is enough to allow us to use Lemma A.19. Hence there exist a neighborhood U x * of x * such that the following set B x * is of measure zero, DISPLAYFORM1 The neighborhoods U x * of all x * ∈ W forms a cover of W, hence, accorded to Lindelöf's open cover lemma, there are countable neighborhoods {U i : i = 1, 2, ...} cover W, i.e. U:= ∪ i U i ⊇ W. As a consequence, the following set A 0 is of measure zero, DISPLAYFORM2 In the last section, we encountered the following estimate for e k = u − DISPLAYFORM0 We can improve the convergence rate of the above if H * has better spectral property. This is the content of Proposition 3.4 and the following lemma is enough to prove it. Lemma A.22. The following inequality holds, DISPLAYFORM1 DISPLAYFORM2 Through our analysis, we discovered that a modification of the BNGD, which we call MBNGD, becomes much easier to analyze and possesses better convergence properties. Note that the in the main paper do not depend on the in this section. The modification is simply to enforce a k = w Theorem B.4. The iteration sequence w k in equation FORMULA8 converges for any initial value w 0 and any step size ε > 0. Furthermore, w k will converge to a global minimizer unless w T k g = 0 for some k. Proof. Obviously, if w T k g = 0 for some k = k 0, then w k = w k0 for all k ≥ k 0, hence w k converges to w k0. Without losing generality, we consider w T k g = 0 for all k and w 0 ≥ 1 below. Firstly, we will prove that w k is bounded and hence converges. 
In fact, according to the Lemma B.2, once w k 2 ≥ ε/ε 0 for some k, the rest of the iteration will converge, hence w k is bounded. Secondly, we will prove w k converges to a vector parallel to u. and the above tends to zero, i.e. lim k→∞ w k+1 − w k = 0.According to the separation property (Lemma A.15), we can chose a δ 0 > 0 small enough such that the separated parts of the set S:= {w|y 2 q < δ 0, w ≥ 1}, S 1 and S 2, have dist(S 1, S 2) > 0.Because y 2 k q k tends to zero, we have w k belongs to S for k large enough, for instance k > k 1. On the other hand, because w k+1 − w k tends to zero, we have w k+1 − w k < dist(S 1, S 2) for k large enough, for instance k > k 2. Then consider k > k 3:= max(k 1, k 2), we have all w k belongs to the same part S 1 or S 2.However, Lemma B.3 says ∞ k=0 y 2 k = ∞, hence w k ∈ S 1 (q k > δ 2) for all k > k 3 is not true. Therefore w k ∈ S 2 (y 2 k > δ 1) for all k > k 3. Consequently, we can claim that ∞ k=0 q k is summable and w k converges to a vector parallel to u. Here we test the convergence and stability of MBNGD for OLS model. Consider the diagonal matrix H = diag(h), where h = (1, ..., κ) is an increasing sequence. The scaling property allows us to set the initial value w 0 having same 2-norm with u, w 0 = u = 1. FIG9 gives an example of a 5-dimensional H with condition number κ = 2000. The GD and MBNGD iteration are executed k = 5000 times where u and w 0 are randomly chosen from the unit sphere. The values of effective step size, loss e k 2 H and error e k are plotted. Furthermore, to explore the performance of GD and MBNGD, the mean values over 300 random tests are given. It is worth to note that, the geometric mean (G-mean) is more reliable than the arithmetic mean (Amean), where the geometric mean of x can be defined as exp(E(ln x)). Here the reliability means that the G-mean converges quickly when the number of tests increase, however the A-mean does not converge as quickly. In this example, the optimal convergence rate of MBNGD is observably better than GD. This acceleration phenomenon is ascribed to the pseudo-condition number of κ * (H *) being less than κ(H). However, if the difference between (pseudo-)condition number of H and H * is small, the acceleration is imperceptible. Another important observation is that the BN significantly extends the range of'optimal' step size, which is embodied by the effective step sizeε k having a large constant C inε = O(Cε −1). This means we can chose step size in BN at a large interval to get almost same or better convergence rate than that of the best choice for GD. FIG10 gives an example of 100-dimension H with condition number κ = 2000. Similar as those in the 5-dimensional case are obtained. However, the best optimal convergence rate of MBNGD here has not noticeably improved compared with GD with the optimal learning rate, which is due to the fact that large d decrease the difference between eigenvalues of H and H *.Additional tests indicate that: larger dimensions leads to larger intervals of'optimal' step size, FIG11 the effect of condition number on the'optimal' interval is small FIG12 ). FIG0 ). | We mathematically analyze the effect of batch normalization on a simple model and obtain key new insights that applies to general supervised learning. | 1,271 | scitldr |
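The reliability gap between the two means is easy to see on synthetic data; the log-normal samples below are only a stand-in for per-trial losses that vary over several orders of magnitude.

import numpy as np

rng = np.random.default_rng(3)
# Stand-in for per-trial losses spanning several orders of magnitude: 20 repeats of 300 trials.
losses = np.exp(rng.normal(loc=-5.0, scale=3.0, size=(20, 300)))

a_mean = losses.mean(axis=1)                  # arithmetic mean per repeat
g_mean = np.exp(np.log(losses).mean(axis=1))  # geometric mean: exp(E[ln x])

print(a_mean.std() / a_mean.mean())           # large relative spread across repeats
print(g_mean.std() / g_mean.mean())           # much smaller: the G-mean is the more reliable summary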
Solving tasks in Reinforcement Learning is no easy feat. As the goal of the agent is to maximize the accumulated reward, it often learns to exploit loopholes and misspecifications in the reward signal ing in unwanted behavior. While constraints may solve this issue, there is no closed form solution for general constraints. In this work we present a novel multi-timescale approach for constrained policy optimization, called `Reward Constrained Policy Optimization' (RCPO), which uses an alternative penalty signal to guide the policy towards a constraint satisfying one. We prove the convergence of our approach and provide empirical evidence of its ability to train constraint satisfying policies. Applying Reinforcement Learning (RL) is generally a hard problem. At each state, the agent performs an action which produces a reward. The goal is to maximize the accumulated reward, hence the reward signal implicitly defines the behavior of the agent. While in computer games (e.g. BID2) there exists a pre-defined reward signal, it is not such in many real applications. An example is the Mujoco domain BID33, in which the goal is to learn to control robotic agents in tasks such as: standing up, walking, navigation and more. Considering the Humanoid domain, the agent is a 3 dimensional humanoid and the task is to walk forward as far as possible (without falling down) within a fixed amount of time. Naturally, a reward is provided based on the forward velocity in order to encourage a larger distance; however, additional reward signals are provided in order to guide the agent, for instance a bonus for staying alive, a penalty for energy usage and a penalty based on the force of impact between the feet and the floor (which should encourage less erratic behavior). Each signal is multiplied by it's own coefficient, which controls the emphasis placed on it. This approach is a multi-objective problem BID20; in which for each set of penalty coefficients, there exists a different, optimal solution, also known as Pareto optimality BID34. In practice, the exact coefficient is selected through a time consuming and a computationally intensive process of hyper-parameter tuning. As our experiments show, the coefficient is not shared across domains, a coefficient which leads to a satisfying behavior on one domain may lead to catastrophic failure on the other (issues also seen in BID17 and BID19). Constraints are a natural and consistent approach, an approach which ensures a satisfying behavior without the need for manually selecting the penalty coefficients. In constrained optimization, the task is to maximize a target function f (x) while satisfying an inequality constraint g(x) ≤ α. While constraints are a promising solution to ensuring a satisfying behavior, existing methods are limited in the type of constraints they are able to handle and the algorithms that they may support -they require a parametrization of the policy (policy gradient methods) and propagation of the constraint violation signal over the entire trajectory (e.g. BID26). This poses an issue, as Q-learning algorithms such as DQN BID21 do not learn a parametrization of the policy, and common Actor-Critic methods (e.g. BID27 BID22 BID0 Reward shaping BID29 3 BID29) build the reward-to-go based on an N-step sample and a bootstrap update from the critic. In this paper, we propose the'Reward Constrained Policy Optimization' (RCPO) algorithm. RCPO incorporates the constraint as a penalty signal into the reward function. 
This penalty signal guides the policy towards a constraint satisfying solution. We prove that RCPO converges almost surely, under mild assumptions, to a constraint satisfying solution (Theorem 2). In addition; we show, empirically on a toy domain and six robotics domains, that RCPO in a constraint satisfying solution while demonstrating faster convergence and improved stability (compared to the standard constraint optimization methods).Related work: Constrained Markov Decision Processes BID1 are an active field of research. CMDP applications cover a vast number of topics, such as: electric grids BID14, networking BID11, robotics BID8 BID10 BID0 BID9 and finance BID15 BID32.The main approaches to solving such problems are (i) Lagrange multipliers BID5 BID4, (ii) Trust Region BID0, (iii) integrating prior knowledge BID9 and (iv) manual selection of the penalty coefficient BID31 BID18 BID25. The novelty of our work lies in the ability to tackle general constraints (both discounted sum and mean value constraints), not only constraints which satisfy the recursive Bellman equation (i.e, discounted sum constraints) as in previous work. The algorithm is reward agnostic. That is, invariant to scaling of the underlying reward signal, and does not require the use of prior knowledge. A comparison with the different approaches is provided in TAB0. A Markov Decision Processes M is defined by the tuple (S, A, R, P, µ, γ) . Where S is the set of states, A the available actions, R: S × A × S → R is the reward function, P: S × A × S → is the transition matrix, where P (s |s, a) is the probability of transitioning from state s to s assuming action a was taken, µ: S → is the initial state distribution and γ ∈ is the discount factor for future rewards. A policy π: S → ∆ A is a probability distribution over actions and π(a|s) denotes the probability of taking action a at state s. For each state s, the value of following policy π is denoted by: DISPLAYFORM0 1 A mean valued constraint takes the form of E[DISPLAYFORM1 DISPLAYFORM2 The goal is then to maximize the expectation of the reward-to-go, given the initial state distribution µ: DISPLAYFORM3 A Constrained Markov Decision Process (CMDP) extends the MDP framework by introducing a penalty c(s, a), a constraint C(s t) = F (c(s t, a t),..., c(s N, a N)) and a threshold α ∈. A constraint may be a discounted sum (similar to the reward-to-go), the average sum and more (see BID1 for additional examples). Throughout the paper we will refer to the collection of these constraints as general constraints. We denote the expectation over the constraint by: DISPLAYFORM0 The problem thus becomes: DISPLAYFORM1 In this work we consider parametrized policies, such as neural networks. The parameters of the policy are denoted by θ and a parametrized policy as π θ. We make the following assumptions in order to ensure convergence to a constraint satisfying policy: DISPLAYFORM0 Assumption 2 is the minimal requirement in order to ensure convergence, given a general constraint, of a gradient algorithm to a feasible solution. Stricter assumptions, such as convexity, may ensure convergence to the optimal solution; however, in practice constraints are non-convex and such assumptions do not hold. Constrained MDP's are often solved using the Lagrange relaxation technique BID3. In Lagrange relaxation, the CMDP is converted into an equivalent unconstrained problem. In addition to the objective, a penalty term is added for infeasibility, thus making infeasible solutions sub-optimal. 
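Since the displayed definitions of the constraint classes are not reproduced above, the following sketch makes the two assumed forms explicit, each estimated by Monte-Carlo from the per-step penalties c(s_t, a_t) of a sampled trajectory:

import numpy as np

def discounted_sum_constraint(penalties, gamma):
    # Assumed form: J_C = E[ sum_t gamma^t * c(s_t, a_t) ], estimated from one trajectory.
    penalties = np.asarray(penalties, dtype=float)
    return float(np.sum(penalties * gamma ** np.arange(len(penalties))))

def mean_valued_constraint(penalties):
    # Assumed form: J_C = E[ (1/T) * sum_t c(s_t, a_t) ], e.g. the average torque used.
    return float(np.mean(penalties))

# Per-step penalties c(s_t, a_t) from one sampled trajectory.
c = [0.3, 0.8, 0.5, 0.1]
print(discounted_sum_constraint(c, gamma=0.99))   # compared with the threshold alpha
print(mean_valued_constraint(c))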
Given a CMDP, the unconstrained problem is DISPLAYFORM0 where L is the Lagrangian and λ ≥ 0 is the Lagrange multiplier (a penalty coefficient). Notice, as λ increases, the solution to converges to that of. This suggests a twotimescale approach: on the faster timescale, θ is found by solving, while on the slower timescale, λ is increased until the constraint is satisfied. The goal is to find a saddle point (θ * (λ *), λ * ) of, which is a feasible solution. Definition 1. A feasible solution of the CMDP is a solution which satisfies J π C ≤ α. We assume there isn't access to the MDP itself, but rather samples are obtained via simulation. The simulation based algorithm for the constrained optimization problem is: DISPLAYFORM1 DISPLAYFORM2 where Γ θ is a projection operator, which keeps the iterate θ k stable by projecting onto a compact and convex set. Γ λ projects λ into the range [0, λ max 4]. ∇ θ L and ∇ λ L are derived from, where the formulation for ∇ θ L is derivied using the log-likelihood trick BID35: DISPLAYFORM3 DISPLAYFORM4 η 1 (k), η 2 (k) are step-sizes which ensure that the policy update is performed on a faster timescale than that of the penalty coefficient λ. DISPLAYFORM5 Theorem 1. Under Assumption 3, as well as the standard stability assumption for the iterates and bounded noise BID6, the iterates (θ n, λ n) converge to a fixed point (a local minima) almost surely. Lemma 1. Under assumptions 1 and 2, the fixed point of Theorem 1 is a feasible solution. The proof to Theorem 1 is provided in Appendix C and to Lemma 1 in Appendix D. Recently there has been a rise in the use of Actor-Critic based approaches, for example: A3C BID22, TRPO BID27 and PPO BID29. The actor learns a policy π, whereas the critic learns the value (using temporal-difference learning -the recursive Bellman equation). While the original use of the critic was for variance reduction, it also enables training using a finite number of samples (as opposed to Monte-Carlo sampling).Our goal is to tackle general constraints (Section 2.2), as such, they are not ensured to satisfy the recursive property required to train a critic. We overcome this issue by training the actor (and critic) using an alternative, guiding, penalty -the discounted penalty. The appropriate assumptions under which the process converges to a feasible solution are provided in Theorem 2. It is important to note that; in order to ensure constraint satisfaction, λ is still optimized using Monte-Carlo sampling on the original constraint.Definition 2. The value of the discounted (guiding) penalty is defined as: DISPLAYFORM0 Definition 3. The penalized reward functions are defined as: DISPLAYFORM1 DISPLAYFORM2 As opposed to, for a fixed π and λ, the penalized value can be estimated using TD-learning critic. We denote a three-timescale (Constrained Actor Critic) process, in which the actor and critic are updated following and λ is updated following, as the'Reward Constrained Policy Optimization' (RCPO) algorithm. Algorithm 1 illustrates such a procedure and a full RCPO Advantage-Actor-Critic algorithm is provided in Appendix A.Algorithm 1 Template for an RCPO implementation DISPLAYFORM3 Initialize actor parameters θ = θ 0, critic parameters v = v 0, Lagrange multipliers and DISPLAYFORM4 Initialize state s 0 ∼ µ 5: DISPLAYFORM5 Sample action a t ∼ π, observe next state s t+1, reward r t and penalties c t 7: DISPLAYFORM6 Equation FORMULA3 8:Critic update: DISPLAYFORM7 9:Actor update: DISPLAYFORM8 10:Lagrange multiplier update: Cγ as Θ γ. 
Assuming that Θ γ ⊆ Θ then the'Reward Constrained Policy Optimization' (RCPO) algorithm converges almost surely to a fixed point (θ DISPLAYFORM9 DISPLAYFORM10 The proof to Theorem 2 is provided in Appendix E.The assumption in Theorem 2 demands a specific correlation between the guiding penalty signal C γ and the constraint C. Consider a robot with an average torque constraint. A policy which uses 0 torque at each time-step is a feasible solution and in turn is a local minimum of both J C and J Cγ . If such a policy is reachable from any θ (via gradient descent), this is enough in order to provide a theoretical guarantee such that J Cγ may be used as a guiding signal in order to converge to a fixed-point, which is a feasible solution. We test the RCPO algorithm in various domains: a grid-world, and 6 tasks in the Mujoco simulator BID33. The grid-world serves as an experiment to show the benefits of RCPO over the standard Primal-Dual approach (solving using Monte-Carlo simulations), whereas in the Mujoco domains we compare RCPO to reward shaping, a simpler (yet common) approach, and show the benefits of an adaptive approach to defining the cost value. While we consider mean value constraints (robotics experiments) and probabilistic constraints (i.e., Mars rover), discounted sum constraints can be immediately incorporated into our setup. We compare our approach with relevant baselines that can support these constraints. Discounted sum approaches such as BID0 and per-state constraints such as BID9 are unsuitable for comparison given the considered constraints. See TAB0 for more details. For clarity, we provide exact details in Appendix B (architecture and simulation specifics). The rover (red square) starts at the top left, a safe region of the grid, and is required to travel to the goal (orange square) which is located in the top right corner. The transition function is stochastic, the rover will move in the selected direction with probability 1 − δ and randomly otherwise. On each step, the agent receives a small negative reward r step and upon reaching the goal state a reward r goal. Crashing into a rock (yellow) causes the episode to terminate and provides a negative reward −λ. The domain is inspired by the Mars Rover domain presented in BID8. It is important to note that the domain is built such that a shorter path induces higher risk (more rocks along the path). Given a minimal failure threshold (α ∈), the task is to find λ, such that when solving for parameters δ, r step, r goal and λ, the policy will induce a path with P π θ µ (failure) ≤ α; e.g., find the shortest path while ensuring that the probability of failure is less or equal to α. As this domain is characterized by a discrete action space, we solve it using the A2C algorithm (a synchronous version of A3C BID22). We compare RCPO, using the discounted penalty C γ, with direct optimization of the Lagrange dual form. FIG1 illustrates the domain and the policies the agent has learned based on different safety requirements. Learning curves are provided in FIG0. The experiments show that, for both scenarios α = 0.01 and α = 0.5, RCPO is characterized by faster convergence (improved sample efficiency) and lower variance (a stabler learning regime). Todorov et al. FORMULA3; BID7 and provide interfaces for training agents in complex control problems. These tasks attempt to imitate scenarios encountered by robots in real life, tasks such as teaching a humanoid robot to stand up, walk, and more. 
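Before turning to the robot details, a heavily simplified single-rollout version of the RCPO update can be sketched as follows. It is not the full algorithm of Appendix A: Monte-Carlo returns replace the bootstrapped targets, there is no parallelism, and the network sizes, learning rates and threshold are placeholder values. It only illustrates the three timescales, with the critic and actor trained on the penalized reward r - lambda * c while lambda follows projected dual ascent on the original constraint.

import torch
import torch.nn as nn

n_states, n_actions, gamma, alpha, lam_lr = 4, 3, 0.99, 0.25, 1e-3
actor = nn.Sequential(nn.Linear(n_states, 32), nn.Tanh(), nn.Linear(32, n_actions))
critic = nn.Sequential(nn.Linear(n_states, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=3e-4)
lam = 0.0

# Stand-in rollout: one-hot states, sampled actions, rewards r_t and penalties c(s_t, a_t).
T = 8
states = torch.eye(n_states)[torch.randint(0, n_states, (T,))]
actions = torch.randint(0, n_actions, (T,))
rewards, penalties = torch.randn(T), torch.rand(T)

# Fast / intermediate timescales: critic and actor see the penalized reward r - lam * c.
shaped = rewards - lam * penalties
returns = torch.zeros(T)
running = 0.0
for t in reversed(range(T)):                   # discounted reward-to-go of the shaped reward
    running = shaped[t] + gamma * running
    returns[t] = running

values = critic(states).squeeze(-1)
log_probs = torch.log_softmax(actor(states), dim=-1)[torch.arange(T), actions]
advantages = returns - values.detach()

opt.zero_grad()
actor_loss = -(log_probs * advantages).mean()
critic_loss = (returns - values).pow(2).mean()
(actor_loss + 0.5 * critic_loss).backward()
opt.step()

# Slow timescale: projected dual ascent on lambda, driven by the *original*
# (here mean-valued) constraint estimate J_C rather than the discounted surrogate.
j_c = penalties.mean().item()
lam = max(0.0, lam + lam_lr * (j_c - alpha))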
The robot is composed of n joints; the state S ∈ R n×5 is composed of the coordinates (x, y, z) and angular velocity (ω θ, ω φ) of each joint. At each step the agent selects the amount of torque to apply to each joint. We chose to use PPO BID29 in order to cope with the continuous action space. In the following experiments; the aim is to prolong the motor life of the various robots, while still enabling the robot to perform the task at hand. To do so, the robot motors need to be constrained from using high torque values. This is accomplished by defining the constraint C as the average torque the agent has applied to each motor, and the per-state penalty c(s, a) becomes the amount of torque the agent decided to apply at each time step. We compare RCPO to the reward shaping approach, in which the different values of λ are selected apriori and remain constant. Learning curves are provided in FIG3 and the final values in TAB2. It is important to note that by preventing the agent from using high torque levels (limit the space of admissible policies), the agent may only be able to achieve a sub-optimal policy. RCPO aims to find the best performing policy given the constraints; that is, the policy that achieves maximal value while at the same time satisfying the constraints. Our experiments show that:1. In all domains, RCPO finds a feasible (or near feasible) solution, and, besides the Walker2d-v2 domain, exhibits superior performance when compared to the relevant reward shaping variants (constant λ values ing in constraint satisfaction). Results are considered valid only if they are at or below the threshold. RCPO is our approach, whereas each λ value is a PPO simulation with a fixed penalty coefficient. Y axis is the average reward and the X axis represents the number of samples (steps).2. Selecting a constant coefficient λ such that the policy satisfies the constraint is not a trivial task, ing in different across domains BID0. When performing reward shaping (selecting a fixed λ value), the experiments show that in domains where the agent attains a high value, the penalty coefficient is required to be larger in order for the solution to satisfy the constraints. However, in domains where the agent attains a relatively low value, the same penalty coefficients can lead to drastically different behavior -often with severely sub-optimal solutions (e.g. Ant-v2 compared to Swimmer-v2).Additionally, in RL, the value (J π R) increases as training progresses, this suggests that a non-adaptive approach is prone to converge to sub-optimal solutions; when the penalty is large, it is plausible that at the beginning of training the agent will only focus on constraint satisfaction and ignore the underlying reward signal, quickly converging to a local minima. We introduced a novel constrained actor-critic approach, named'Reward Constrained Policy Optimization' (RCPO). RCPO uses a multi-timescale approach; on the fast timescale an alternative, discounted, objective is estimated using a TD-critic; on the intermediate timescale the policy is learned using policy gradient methods; and on the slow timescale the penalty coefficient λ is learned by ascending on the original constraint. We validate our approach using simulations on both grid-world and robotics domains and show that RCPO converges in a stable and sample efficient manner to a constraint satisfying policy. An exciting extension of this work is the combination of RCPO with CPO BID0. 
As they consider the discounted penalty, our guiding signal, it might be possible to combine both approaches. Such an approach will be able to solve complex constraints while enjoying feasibility guarantees during training. The original Advantage Actor Critic algorithm is in gray, whereas our additions are highlighted in black.1: Input: penalty function C(·), threshold α and learning rates η1, η2, η3 2: Initialize actor π(·|·; θp) and critic V (·; θv) with random weights 3: Initialize λ = 0, t = 0, s0 ∼ µ Restart 4: for T = 1, 2,..., Tmax do 5:Reset gradients dθv ← 0, dθp ← 0 and ∀i: dλi ← 0 6: tstart = t 7:while st not terminal and t − tstart < tmax do 8:Perform at according to policy π(at|st; θp) 9:Receive rt, st+1 and penalty scoreĈt 10:t ← t + 1 11: DISPLAYFORM0 for τ = t − 1, t − 2,..., tstart do 13: DISPLAYFORM1 Update θv, θp and λ 21:Set λ = max(λ, 0) Ensure weights are non-negative (Equation 4) The MDP was defined as follows: DISPLAYFORM0 In order to avoid the issue of exploration in this domain, we employ a linearly decaying random restart BID12. µ, the initial state distribution, follows the following rule: DISPLAYFORM1 where S denotes all the non-terminal states in the state space and s * is the state at the top left corner (red in FIG1). Initially the agent starts at a random state, effectively improving the exploration and reducing convergence time. As training progresses, with increasing probability, the agent starts at the top left corner, the state which we test against. The A2C architecture is the standard non-recurrent architecture, where the actor and critic share the internal representation and only hold a separate final projection layer. The input is fully-observable, being the whole grid. The network is as follows: Published as a conference paper at ICLR 2019 between the layers we apply a ReLU non-linearity. DISPLAYFORM2 As performance is noisy on such risk-sensitive environments, we evaluated the agent every 5120 episodes for a length of 1024 episodes. To reduce the initial convergence time, we start λ at 0.6 and use a learning rate lr λ = 0.000025. For these experiments we used a PyTorch BID24 implementation of PPO BID13. Notice that as in each domain the state represents the location and velocity of each joint, the number of inputs differs between domains. The network is as follows: where DiagGaussian is a multivariate Gaussian distribution layer which learns a mean (as a function of the previous layers output) and std, per each motor, from which the torque is sampled. Between each layer, a Tanh non-linearity is applied. DISPLAYFORM0 We report the online performance of the agent and run each test for a total of 1M samples. In these domains we start λ at 0 and use a learning rate lr λ = 5e − 7 which decays at a rate of κ = (1 − 1e − 9) in order to avoid oscillations. The simulations were run using Generalized Advantage Estimation BID28 with coefficient τ = 0.95 and discount factor γ = 0.99. We provide a brief proof for clarity. We refer the reader to Chapter 6 of BID6 for a full proof of convergence for two-timescale stochastic approximation processes. Initially, we assume nothing regarding the structure of the constraint as such λ max is given some finite value. The special case in which Assumption 2 holds is handled in Lemma 1.The proof of convergence to a local saddle point of the Lagrangian contains the following main steps:1. Convergence of θ-recursion: We utilize the fact that owing to projection, the θ parameter is stable. 
We show that the θ-recursion tracks an ODE in the asymptotic limit, for any given value of λ on the slowest timescale.2. Convergence of λ-recursion: This step is similar to earlier analysis for constrained MDPs. In particular, we show that λ-recursion in converges and the overall convergence of (θ k, λ k) is to a local saddle point (θ DISPLAYFORM0 Step 1: Due to the timescale separation, we can assume that the value of λ (updated on the slower timescale) is constant. As such it is clear that the following ODE governs the evolution of θ:θ DISPLAYFORM1 where Γ θ is a projection operator which ensures that the evolution of the ODE stays within the compact and convex set Θ: DISPLAYFORM2 As λ is considered constant, the process over θ is: DISPLAYFORM3 Thus can be seen as a discretization of the ODE. Finally, using the standard stochastic approximation arguments from BID6 concludes step 1.Step 2: We start by showing that the λ-recursion converges and then show that the whole process converges to a local saddle point of L(λ, θ).The process governing the evolution of λ: DISPLAYFORM4 is the limiting point of the θ-recursion corresponding to λ k, can be seen as the following ODE:λ DISPLAYFORM5 As shown in BID6 chapter 6, (λ n, θ n) converges to the internally chain transitive invariant sets of the ODE, DISPLAYFORM6 Finally, as seen in Theorem 2 of Chapter 2 of BID6, θ n → θ * a.s. then λ n → λ(θ *) a.s. which completes the proof. The proof is obtained by a simple extension to that of Theorem 1. Assumption 2 states that any local minima π θ of 2 satisfies the constraints, e.g. J π θ C ≤ α; additionally, BID16 show that first order methods such as gradient descent, converge almost surely to a local minima (avoiding saddle points and local maxima). Hence for λ max = ∞ (unbounded Lagrange multiplier), the process converges to a fixed point (θ * (λ *), λ * ) which is a feasible solution. As opposed to Theorem 1, in this case we are considering a three-timescale stochastic approximation scheme (the previous Theorem considered two-timescales). The proof is similar in essence to that of BID26. The full process is described as follows: DISPLAYFORM0 s∼µ log π(s, a; θ)V (λ, s t ; v k) ] DISPLAYFORM1 2 /∂v kStep 1: The value v k runs on the fastest timescale, hence it observes θ and λ as static. As the TD operator is a contraction we conclude that v k → v(λ, θ).Step 2: For the policy recursion θ k, due to the timescale differences, we can assume that the critic v has converged and that λ is static. Thus as seen in the proof of Theorem 1, θ k converges to the fixed point θ(λ, v).Step 3: As shown previously (and in BID26), (λ n, θ n, v n) → (λ(θ *), θ *, v(θ *)) a.s. Denoting by Θ = {θ : J π θ C ≤ α} the set of feasible solutions and the set of local-minimas of J π θ Cγ as Θ γ. We recall the assumption stated in Theorem 2: Assumption 4. Θ γ ⊆ Θ.Given that the assumption above holds, we may conclude that for λ max → ∞, the set of stationary points of the process are limited to a sub-set of feasible solutions of. As such the process converges a.s. to a feasible solution.1. Assumption 2 does not hold: As gradient descent algorithms descend until reaching a (local) stationary point. In such a scenario, the algorithm is only ensured to converge to some stationary solution, yet said solution is not necessarily a feasible one. As such we can only treat the constraint as a regularizing term for the policy in which λ max defines the maximal regularization allowed.2. 
Assumption 4 does not hold: In this case, it is not safe to assume that the gradient of the guiding (discounted-penalty) objective may be used as a guide for solving the original constrained problem. A Monte-Carlo approach may be used (as seen in Section 5.1) to approximate the gradients; however, lacking a critic, it does not enjoy the benefits of reduced variance and smaller sample requirements. | For complex constraints in which it is not easy to estimate the gradient, we use the discounted penalty as a guiding signal. We prove that under certain assumptions it converges to a feasible solution. | 1,272 | scitldr |
The Handheld Virtual Panel (HVP) is the virtual panel attached to the non-dominant hand's controller in virtual reality (VR). The HVP is the go-to technique for enabling menus and toolboxes in VR devices. In this paper, we investigate target acquisition performance for the HVP as a function of four factors: target width, target distance, the direction of approach with respect to gravity, and the angle of approach. Our results show that all four factors have significant effects on user performance. Based on these results, we propose guidelines towards the ergonomic and performant design of HVP interfaces. With the increasing popularity of consumer virtual reality (VR), we see more and more VR apps for creativity and productivity. These apps fundamentally require menus and toolboxes for the assortment of options and controls they offer. And the interaction artifact that is quickly becoming the go-to technique for this is the handheld virtual panel (HVP). The HVP provides the primary toolbox in Google's TiltBrush (Figure 1 (left)) and Blocks, Oculus's Quill and Medium (Figure 1 (right)), and HTC Vive's MakeVR. Szalvari et al. in 1997 proposed the personal interaction panel, where the user holds a tracked tablet in the second hand while doing their primary interaction with the dominant hand using a stylus. HVPs extend that concept to virtual panels anchored to the controller in the non-dominant hand, using ray-tracing instead of a stylus. There are multiple advantages to such an interaction. First, handheld windows move along with the user, so they are always within reach. Second, they do not overly clutter the user's view, unless explicitly moved by the user. Third, handheld windows take advantage of the proprioceptive sense because they are attached to the non-dominant hand. However, even with the ubiquity of the HVP in products and research literature, we do not have a sense of what factors govern performance of target selection in HVPs. Consequently, there is a need to understand and quantify HVP target selection performance while considering these two factors: 1) hand motion here is governed by the direction of motion in relation to the ground due to the effects of gravity, and 2) since both the target and the pointer can be moved and controlled by the user during acquisition, the user's approach will vary depending on the angle of movement in addition to distance and width. We conduct a study to measure HVP target acquisition performance in relation to four factors: the direction of movement with respect to gravity, the angle of movement with respect to the body, distance, and width. The results show that performance depends significantly on all four factors. Based on these results, we propose guidelines towards the ergonomic design of HVP interfaces. In 1993, Feiner et al.
described three types of 2D windows in a virtual or augmented environment: Surround-fixed that are displayed at a fixed position within the surrounding, Display-fixed that are fixed at a position relative to the display itself, and Worldfixed (or Object-fixed) that are fixed to objects in the 3D world. The HVP is an instance of the object-fixed window with the object being the non-dominant hand. Before Szalvari et al.'s proposal of the personal interaction panel, other works proposed handheld panels for specific VR scenarios using pen-and-tablet interfaces where the non-dominant hand held a tablet and the dominant hand held a pen to tap or draw on the tablet. For instance, Stoakley et al.'s Windows-in-Miniature (WIM) proposes a miniature copy of the virtual world in the non-dominant hand for navigation and manipulation. Other works study effects of visual, haptic feedback for bimanual input in VR with a panel in the non-dominant hand. Lindeman et al. found that users are 21% faster in shape selection tasks when using handheld 2D panels similar to HVP compared to surround-fixed panels that float in front of the user. Similarly, Mine et al. found that pointing to a target on a handheld panel was doubly fast than a fixed floating panel in space. However, none of the earlier works examine target acquisition in the HVP with respect to multiple target sizes and distances. Further, no existing work has examined the performance of the current HVP incarnation with the current hardware and interface. Consequently, we study the effect of distance and width on movement time for the HVP. While most works on handheld panels focus on direct pen or finger input, today's commercial VR systems rely on a controller in each hand with a ray tracing approach being used from the controller on the dominant hand to the targets on the panel. As hand tracking matures and becomes closer to commercial use in VR systems, we also hope to see explorations on hand-gesture based HVP interfaces. A related thread of work is ray tracing while using whole-handed 3D movements. Whole-handed 3D movements involve multiple limb movements, requiring higher muscular force and leading to variable movement trajectories, and hence variable pointing times. Murata et al. show that the direction of hand movement significantly affects movement time for a 3D pointing task. Following works found directional differences relating to shoulder and forearm motion. Zeng et al. found that adduction movements are slower than abduction for 2D targets using hand motion in 3D space (detected by Kinect). In our case, when using the HVP in current VR apps, a right-handed user holding the controller in the right hand usually approaches a tool on the panel in the left-hand from right to left direction. We investigate varying origins and angles in our study. There are other techniques and studies on target acquisition in 3D and in VR, but they address non-handheld, non-2D panel scenarios such as 3D object selection in the scene. The yellow square shows a target to be selected. It is currently at 67.5 • at maximum distance. Aside from the traditional factors of distance and width, we need to take into account the effect of gravity for multiple starting positions and angles of movement. Figure 2 shows a participant doing the study. Similar to current HVPs, the dominant-hand controller raycasts a pointer into the scene. Figure 3 shows the HVP schematic that the user sees in VR. 
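Since the dominant-hand controller raycasts a pointer onto the handheld panel, the hit point that drives target selection can be computed with a standard ray-plane intersection. The sketch below is only illustrative (the study itself was implemented in Unity3D, as described later); the vector names, and the assumption that the panel is a flat plane anchored to the non-dominant controller, are ours.

```python
import numpy as np

def ray_panel_hit(ray_origin, ray_dir, panel_origin, panel_normal):
    """Return the 3D point where the controller ray meets the panel plane,
    or None if the ray is parallel to or points away from the panel."""
    denom = np.dot(panel_normal, ray_dir)
    if abs(denom) < 1e-6:          # ray parallel to the panel
        return None
    t = np.dot(panel_normal, panel_origin - ray_origin) / denom
    if t < 0:                      # panel is behind the controller
        return None
    return ray_origin + t * ray_dir

def hit_to_panel_coords(hit, panel_origin, panel_u, panel_v):
    # Project the hit point onto the panel's local axes to decide which
    # grid cell (e.g. in a 12x12, 24x24, or 48x48 layout) is under the pointer.
    offset = hit - panel_origin
    return np.dot(offset, panel_u), np.dot(offset, panel_v)
```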
For selection, the user navigates the pointer on to the desired target and presses a button on the controller. The user can also move the non-dominant hand to move the target on the panel closer to the pointer. We investigated four independent variables: 1) StartPos: starting position of the pointer that determines the direction of movement with respect to gravity. StartPos has two levels, top: top-right and bottom: bottom-right position of the panel. 2) Angle: angle of movement relative to the right edge of the panel at StartPos that offers an additional level of nuance into the effect of gravity based on the angle of motion with respect to the gravity vector. It has two levels: 22.5 • & 67.5 •. Figure 3 shows the angles for the top StartPos. 3) Distance: target distance from StartPos along the line of one of two angles. It has three exponentially increasing levels: 2cm, 6cm, 18cm. 4) Width: target width. We keep the panel size constant and vary width by changing number of targets (all square shaped). Distance had three levels: 0.63cm (48X48 layout), 1.25cm (24x24), 2.5cm (12x12). Figure 3 shows the 12x12 layout. The panel size was kept slightly larger than existing panels in commercial applications to allow testing the distance factor with a larger range. In total, there were 2x2x3x3=36 conditions and a within-subjects design was used. across participants is not possible. Width was completely counterbalanced across participants. For each width, StartPos was completely counterbalanced across participants. For each width and startpos, the order of trials (consisting of Distance-Angle combinations) was randomized. Twelve (7 female, 5 male) participants took part in the study (Range: 18-29, M = 22, SD = 3.004). All participants were right-handed and did not have any experience with VR. We believe the will be similar for a mirrored study for left-handed users. The experimental application was developed in Unity3D. Participants wore an Oculus Rift head-mounted display and held Oculus Rift Touch Controllers, one on each hand, to interact with the VR environment. The task involved participants selecting targets on a HVP that is attached to the non-dominant hand, using the controller on the dominant hand that controls the raycast pointer. The user selects a target by clicking a button on the controller. For each trial, we measured the target acquisition time (time taken from the highlighting of the desired target until the button click), and errors (number of incorrect selections). After getting familiar with the apparatus and interface, participants performed 6 practice trials followed by the study. Before every trial, participants were required to bring the pointer back to the StartPos. The next target to be selected was highlighted 0.5s after the pointer was back at StartPos. Participants selected targets by bringing the raycasted pointer within the target's area (upon which a dark border indicated visual feedback), and pushing down on the trigger located at the back of their controller. We purposely avoided fatigue by mandating a 30s break after every 18 trials which the participants could extend if they wanted to. Upon incorrect selection, participants were not asked to redo the trial, but were given visual feedback that the selection was incorrect. Only the correct trials were part of the time analysis. Participants were instructed to perform the task as quickly and accurately as possible. At the end, a semi-structured interview was conducted. 
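A small sketch of how the 2x2x3x3 = 36 conditions described above could be assembled per participant: widths counterbalanced across participants, StartPos counterbalanced within each width, and the Distance-Angle trial order randomized. This is an illustration of the stated design, not the study's actual Unity3D code; the helper name and the simple rotation-based counterbalancing stand-in are ours.

```python
import itertools
import random

WIDTHS = [0.63, 1.25, 2.5]          # cm (48x48, 24x24, 12x12 layouts)
START_POS = ['top', 'bottom']
ANGLES = [22.5, 67.5]               # degrees
DISTANCES = [2, 6, 18]              # cm

def participant_schedule(participant_id, rng=random):
    # Rotate width order across participants (a stand-in for the full
    # counterbalancing of Width and StartPos described in the paper).
    shift = participant_id % len(WIDTHS)
    width_order = WIDTHS[shift:] + WIDTHS[:shift]
    schedule = []
    for width in width_order:
        starts = START_POS if participant_id % 2 == 0 else START_POS[::-1]
        for start in starts:
            trials = list(itertools.product(ANGLES, DISTANCES))
            rng.shuffle(trials)     # Distance-Angle order randomized
            schedule += [(width, start, a, d) for a, d in trials]
    return schedule                 # 3 widths x 2 starts x 6 trials = 36 conditions
```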
We conducted a 4-way ANOVA and found main effects of all four variables on target acquisition time. However, there were interaction effects of StartPos*Distance (F (1.224, 13.463) = 6.028, p <.05, η 2 =.354 with Greenhouse-Geisser correction (GG)) and of StartPos*Angle(F = 21.776, p <.005, η 2 =.664). Therefore, we ignore the main effects of StartPos, Angle, and Distance, and analyze the interaction effects. Since there were no interaction effects involving Width, we consider the main effect of Width (F = 104.241, p <.001, η 2 =.905). All posthoc tests described below have been conducted using Bonferroni correction. We conduct posthoc tests for Width, which show that the target acquisition time for all three widths is significantly different from each other (p < 0.001 for all). Figure 4(left) shows the effect of width on target acquisition time. Thus, the effect of Width is not affected by the other variables even though the other variables also have significant effects on time. StartPos have an interaction. We conducted 1-way ANOVAs for each of the two StartPos, top and bottom separately to see how distance affects the time in both. Figure 4(middle) shows the interaction plot. The effect of Distance is significant for both top (F = 6.856, p <.01, η 2 =.384) and bottom (F (1.142, 12.558) = 23.728, p <.001, η 2 =.683 with GG). For top, posthoc tests show the medium distance targets take significantly lower time than for small (p<.05) and large distance targets (p < .01). However, for bottom, both small and medium distance targets take significantly lower time than the large distance targets (p < 0.01, p < .001 respectively). While the large distance targets expectedly perform worst, for top, the medium distance's performance is significantly lower. This is an interesting and is possibly due to selective tilting of the controller by the participants depending on the target location. Participants use a combination of hand movement and orienting the controller in the hand to have the raycast pointer reach the target. Since the medium distance targets are in the middle of the panel, users can reach it with a combination of orientation change and hand motion. However, since even a slight orientation can in a large displacement of the raycast pointer, smaller targets would be overshot with a flick of the wrist. With bottom, since the user is moving against gravity, the small and medium distances are comparable, but very much lower than the large distance targets. Angle. The effect of angle also depends on StartPos. Figure 4(right) shows the interaction plot. For top, 22.5 • take a significantly lower time than 67.5 • (F = 11.793, p <.01, η 2 =.517). For bottom, the inverse is true with 67.5 • taking a significantly lower time than 22.5 • (F = 16.201, p <.005, η 2 =.596). Again, owing to gravity, for bottom, the 22.5 • angle requires the user to make more of an effort against gravity than 67.5 •. It's vice versa for top for the same reason. No variable had a significant effect on error rate. While error rate values decreased with width (6.5%, 3.6%, 1.8%), the differences were not significant. Unsurprisingly, majority of the participants reported the bottom starting position to be much more fatiguing. Some participants also mentioned that they thought that distance or angle had a very small effect on the difficulty of the task. The suggest that gravity played a major part even when our experiment design minimized fatigue between conditions. 
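The analysis above (a four-way repeated-measures ANOVA followed by Bonferroni-corrected posthoc tests) can be reproduced on a long-format trial table along the following lines. This is a generic sketch, not the authors' analysis script: the file name and column names are assumptions, and note that statsmodels' AnovaRM does not apply the Greenhouse-Geisser correction automatically, so sphericity corrections would have to be added separately.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy import stats
from itertools import combinations

# Assumed columns: participant, startpos, angle, distance, width, time
df = pd.read_csv("hvp_trials.csv")
per_cell = (df.groupby(["participant", "startpos", "angle", "distance", "width"])
              ["time"].mean().reset_index())

# 4-way repeated-measures ANOVA on per-cell means (balanced within-subjects design).
anova = AnovaRM(per_cell, depvar="time", subject="participant",
                within=["startpos", "angle", "distance", "width"]).fit()
print(anova)

# Bonferroni-corrected pairwise comparisons for the three width levels.
widths = sorted(per_cell["width"].unique())
pairs = list(combinations(widths, 2))
for w1, w2 in pairs:
    a = per_cell[per_cell.width == w1].groupby("participant")["time"].mean()
    b = per_cell[per_cell.width == w2].groupby("participant")["time"].mean()
    t, p = stats.ttest_rel(a, b)
    print(w1, w2, t, min(p * len(pairs), 1.0))   # Bonferroni correction
```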
This gravity effect would be much more pronounced with longer, fatigue-inducing tasks. Most current HVPs use a cube-style panel with equal vertical and horizontal sizes. One simple solution to minimize the effect of gravity would be to design HVPs with larger horizontal than vertical extents. Our distance-based results suggest that minimizing hand motion, and instead relying on wrist flicks to move the raycast pointer, could help performance. Therefore, as opposed to having smaller panels, panel sizes can be increased (at least horizontally) to encourage the use of coarse wrist flicking. Further, the design needs to minimize motion when the user is performing tasks below the panel (for instance, creating a ground texture) and will need to go against gravity to reach the HVP. One solution here would be to arrange targets on the panel such that the high-frequency targets are placed at the bottom of the panel, thus making them easier to reach from the bottom, while not overtly affecting performance from the top. Another possibility is to retarget the HVP at a lower position while the non-dominant hand remains at the same position, so that the user has to move less against gravity to reach the HVP. Retargeting has not been explored in the context of HVPs and could be a very useful technique to counter such effects. However, the tradeoff of increasing the visuo-haptic disconnect in this case would need to be explored. Overall, we suggest three takeaways that should be considered by designers of HVPs depending on the context: 1) panels with large horizontal widths, as opposed to square-shaped ones, should be considered to counter the effects of gravity and encourage wrist flicking; 2) high-frequency targets should be placed at the bottom of the panel; and 3) retargeting of the HVP given the same non-dominant hand position should be investigated to minimize user motion against gravity. While our work indicates some concrete directions to better the design of HVPs, one aspect that we did not explore in detail is the potential for HVPs to support bimanual parallel input. The HVP is based on Guiard's kinematic chain model for bimanual input, which proposes principles of asymmetric two-handed interface design. However, bimanual input may not always be useful. Buxton et al. investigate parallelism, i.e., the degree to which the two hands are working in parallel, and concluded that participants are capable of parallelism and that it improves task performance, but its use and efficiency depend on the mechanics of the task. Kabbash et al. further showed that if a 2D task follows Guiard's model, it improves performance, and that not following the model can worsen bimanual performance. With the HVP, users can potentially move both hands in parallel according to Guiard's kinematic chain model and improve their speed and performance. In addition to retargeting, bimanual parallel input is a promising direction for future exploration. The handheld virtual panel is the most popular technique for accessing tools or menus in commercial VR creativity and productivity applications. In this paper, we conducted an evaluation of target acquisition performance in the HVP as a function of four variables. Our results show that all four have an effect on user performance. While there are expected effects, such as reduced acquisition time with increasing width, the evaluation also suggests that gravity may be a crucial issue even when fatigue is minimized. Based on these results, we list takeaways to help improve the design of HVPs and indicate paths for future explorations.
We believe addressing the limitations of HVPs uncovered in our study will go a long way in improving the user experience of HVP-based VR applications. | The paper investigates target acquisition for handheld virtual panels in VR and shows that target width, distance, direction of approach with respect to gravity, and angle of approach, all impact user performance. | 1,273 | scitldr |
Deep neural networks have demonstrated unprecedented success in various knowledge management applications. However, the networks created are often very complex, with large numbers of trainable edges which require extensive computational resources. We note that many successful networks nevertheless often contain large numbers of redundant edges. Moreover, many of these edges may have negligible contributions towards the overall network performance. In this paper, we propose a novel iSparse framework and experimentally show, that we can sparsify the network, by 30-50%, without impacting the network performance. iSparse leverages a novel edge significance score, E, to determine the importance of an edge with respect to the final network output. Furthermore, iSparse can be applied both while training a model or on top of a pre-trained model, making it a retraining-free approach - leading to a minimal computational overhead. Comparisons of iSparse against PFEC, NISP, DropConnect, and Retraining-Free on benchmark datasets show that iSparse leads to effective network sparsifications. Deep neural networks (DNNs), particularly convolutional neural networks (CNN), have shown impressive success in many applications, such as facial recognition , time series analysis , speech recognition , object classification , and video surveillance (Karpathy & et. at., 2014). As the term "deep" neural networks implies, this success often relies on large networks, with large number of trainable edges (weights) (; ; ;). While a large number of trainable edges help generalize the network for complex and diverse patterns in large-scale datasets, this often comes with enormous computation cost to account for the non-linearity of the deep networks (ReLU, sigmoid, tanh). In fact, DNNs owe their recent success to hardware level innovations that render the immense computational requirements practical (Ovtcharov & et. al., 2015;). However, the benefits of hardware solutions and optimizations that can be applied to a general purpose DNN or CNN are limited and these solutions are fast reaching their limits. This has lead to significant interest in networkspecific optimization techniques, such as network compression (Choi & et. al., 2018), pruning , and regularization (Srivastava & et. al., 2014;), aim to reduce the number of edges in the network. However, many of these techniques require retraining the pruned network, leading to the significant amount of computational waste. Many successful networks nevertheless often contain large numbers of redundant edges. Consider for example, the weights of sample network shown in Figure 1a. As we see here, the weight distribution is centered around zero and has significant number of weights with insignificant contribution to the network output. Such edges may add noise or non-informative information leading to reduction in the network performance. (; ;) has shown that it is possible to predict 95% network parameters while only learning 5% parameters. Sparsification techniques can generally be classified into neuron/kernel sparcification and edge/weight sparcification techniques (; Ashouri et al., Figure 1 : Overview of weight distribution and model accuracies for MNIST dataset 2018): proposed to eliminate neurons that have low l2-norms of their weights, whereas proposed a neuron importance score propagation (NISP) technique where neuron importance scores (using Roffo & et. al. -See Equation 5 ) are propagated from the output layer to the input layer in a back-propagation fashion. 
Drop-out (Srivastava & et. al., 2014) technique instead deactivates neuron activations at random. As an edge sparsification technique, DropConnect selects edges to be sparsified randomly. showed that the network performance can be maintained by eliminating insignificant weights without modifying the network architecture. Following these works, we argue network sparsification can be a very effective tool for reducing sizes and complexities of DNNs and CNNs, without any significant loss in accuracy. However, we also argue that edge weights cannot be used "as is" for pruning the network. Instead, one needs to consider the significance of each edge within the context of their place in the network (Figure 2): "Two edges in a network with the same edge weight may have different degrees of contributions to the final network output" and in this paper, we show that it is possible to quantify significance of each edge in the network, relative to their contributions to the final network output and use these measures significance to minimize the redundancy in the network by sparsifying the weights that contributes insignificantly to network. We, therefore, propose a novel iSparse framework, and experimentally show, that we can sparsify the network, by almost 50%, without impacting the network performance. The key contributions of our proposed work are as follows: • Output-informed quantification of the significance of network parameters: Informed by the final layer network output, iSparse computes and propagates edge significant scores that measure the importance of each edge with respect to the model output (Section 3). • Retraining-free network sparsification (Sparsify-with): The proposed iSparse framework is robust to edge sparsification and can maintain network performance without having to retraining the network. This implies that one can apply iSparse on pre-trained networks, on-the-fly, to achieve the desired level of sparsification (Section 3.3) • Sparsification during training (Train-with): iSparse can also be used as a regularizer during the model training allowing for learning of sparse networks from scratch (Section 4). As the sample in Figure 1b shows, iSparse is able to achieve 30-50% sparsification with minimal impact on model accuracy. More detailed experimental comparisons (See Section 5) of iSparse against PFEC, NISP, Retraining-Free and DropConnect on benchmark datasets illustrated that iSparse leads to more effective network sparsifications. A neural network is a sequence of layers of neurons to help learn (and remember) complex nonlinear patterns in a given dataset . Recently, deep neural networks (DNNs), and particularly CNNs, which leverage recent hardware advances to increase the number of layers in the -.*,)$#*)#*/0#)10+%22#).*3. * Figure 2: Overview of iSparse sparsification, considering the n i's contribution to overall output rather than only between n i and n j neurons network to scales that were not practical until recently, (; ; ; ; Karpathy & et. at., 2014) have shown impressive success in several data analysis and machine learning applications. 
A typical CNN consists of feature extraction layers are responsible for learning complex patterns in the data and remember them through layer weights., by training for a weight matrix W l R m l ×n l (see Section 3.1 for further details); activation layers, which help capture non-linear patterns in the data through activation functions (σ) which maps the output from a feature extraction layer to a nonlinear space (ReLU and softmax are commonly used activation functions); and pooling layers, (sampling) which up-or down-sample the intermediate data in the network. The training process of a neural network often comprises of two key stages: forward-propagation (upstream) maps the input data, X, to an output variable,Ŷ. At each layer, we haveŶ The number of trainable parameters in a deep network can range from as low as tens of thousands to hundreds of millions (Table 1 in Section 5). The three order increase in the number trainable parameter may lead to parameters being redundant or may have negligible contribution to the overall network output. This redundancy and insignificance of the network parameters has led to advancements in network regularization, by introducing dynamic or informed sparsification in the network. These advancements can be broadly classified into two main categories: parameter pruning and parameter regularization. In particular, pruning focuses on compressing the network by eliminating the redundant or insignificant parameters. (; Han & et. al., 2016) aims to prune the parameters with near-zero weights inspired from l 1 and l 2 regularization . choose to filter out convolutional kernel with minimum weight values in given layer. Recently, minimizes the change in final network performance by eliminating the neuron that have minimal impact on the network output by leveraging neuron importance score (N L) (See Section 5.3) -computed using Inf-FS (Roffo & et. al., 2015). More complex approaches have been proposed to tackle the problem of redundancy in the network through weight quantization. (Rastegari & et. al., 2016) propose to the quantize the inputs and output activations of the layers in a CNN by using step function and also leveraging the binary operation by using the binary weights opposed to the real-values weights. (Chen & et. al., 2015) focuses on low-level mobile hardware with limited computational power, and proposed to leverage the inherent redundancy in the network for using hashing functions to compress the weights in the network. showed that the each input feature to a given layer in the network rarely have the same importance, therefore, learning there individual importance (attention) helps improve the performance of the network. More recently, has shown that input data informed deep networks can provide high-performance network configurations. In this paper, we rely on output information for identifying and eliminating insignificant parameters from the network, without having to update the edge weights or retraining the network. As discussed in Section 1, in order to tackle complex inputs, deep neural networks have gone increasingly deeper and wider. This design strategy, however, often in large numbers of insignificant edges 1 (weights), if not redundant. In this section, we describe the proposed iSparse, framework which quantifies the significance of each individual connection in the network with respect to the overall network output to determine the set of edges that can be sparsified to alleviate network redundancy and eliminate insignificant edges. 
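The per-layer computation referenced above (each trainable layer producing its output by applying the layer's activation to the weighted input plus bias, which then becomes the next layer's input) can be written compactly as below. This is a generic dense-layer sketch to fix notation, not code from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def layer_forward(Y, W, b, activation=relu):
    # One trainable layer l: weight matrix W_l (m_l x n_l), bias b_l,
    # activation sigma_l; the result is the input of layer l+1.
    return activation(W @ Y + b)
```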
iSparse aims to determine the significance of the edges in the network to make informed sparsification of the network. A typical neural network, N, can be viewed as a sequential arrangement of convolutional (C) and fully connected (F) layers:, here, X is the input, L is the total number of layers in the network and L {C, F}, s.t., any given layer, where, Y l is the input to the layer (s.t. Y l =Ŷ l−1 and for l = 1, Y 1 = X) and σ l, W l, and B l are the activation function, weight, and bias respectively. Note that, if the l th layer has m l neurons and the (l − 1) th layer has n l neurons, Given this formulation, the problem of identifying insignificant edges can be formulated as the problem of generating a sequence of binary mask matrices, M 1,..., M L, that collectively represents whether any given edge in the network is sparsified or not: Let M l be a mask matrix as defined in Equation 2, and M l can be expanded as where each M l,i,j ∈ {0, 1} corresponds to an edge e l,i,j in the network. Our goal in the paper is to develop an edge significant score measure to help set the binary value of M l,i,j for each edge in the network. More specifically, we aim to associate a non-negative real valued number, E l,i,j ≥ 0, to each edge in the network, s.t. Here, τ l (θ l) represents the lowest significance of the θ l % of the most significant edges in the layer l. Intuitively, given a target sparsification rate, θ l, we rank all the edges based on their edge significance scores and keep only the highest scoring θ l % of the edges by setting their mask values to 1. As we have seen in Figure 1a, the (signed) weight distribution of the edges in a layer is often centered around zero, with large numbers of edges having weights very close to 0. As we also argued in the Introduction, such edges can work counter-intuitively and add noise or non-informative information leading to reduction in the network performance. In fact, several existing works, such as , relies on these weights for eliminating insignificant edges without having to retrain the network architecture. However, as we also commented in the Introduction, we argue that edge weights should not be used alone for sparsifying the network. Instead, one needs to consider each edge within the context of their place in the network: Two edges in a network with the same edge weight may have different degrees of contributions to the final network output. Unlike existing works, iSparse takes this into account when selecting the edges to be sparsified (Figure 3). Figure 3: A sample network architecture and its sparsification using Retraining-free and iSparse; here node labels indicate input to the node; edge labels indicate the edge weights; and edge labels between parentheses indicate edge contribution More specifically, let W + l be the absolute positive of the weight matrix,W l, for edges in l th layer. We compute the corresponding edge significance score matrix, E l, as where, N l represents the neuron significance scores 2, N l,1 through N l,n l, and " " represents the scalar multiplication between edge weights and neuron scores. N l,i, denotes the significance of the i th input neuron to the l th connection layer of the network, which itself is defined recursively, based on the following layer in the network, using the conventional dot product: Note that N l can be expanded as Above, N L denotes the neuron scores of the final output layer, and N L is defined using infinite feature selection (Roffo & et. 
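A minimal sketch of the masking step formulated above: given some per-layer edge significance matrix E_l (however it is computed; iSparse derives it from the absolute weights combined with downstream neuron scores, as described next), keep the top θ_l% of edges and zero out the rest, with τ_l(θ_l) playing the role of the cut-off score. Variable names are ours and the snippet is illustrative rather than the authors' implementation.

```python
import numpy as np

def edge_mask(E, keep_ratio):
    """Binary mask M_l: 1 for the top `keep_ratio` fraction of edges by
    significance score E_{l,i,j}, 0 for the edges to be sparsified."""
    flat = E.ravel()
    k = max(1, int(round(keep_ratio * flat.size)))
    tau = np.sort(flat)[-k]        # tau_l(theta_l): score of the k-th most significant edge
    return (E >= tau).astype(E.dtype)

def sparsify_layer(W, E, keep_ratio):
    # The layer then uses W * M (element-wise) in place of W, so edges
    # with insignificant contribution no longer affect the output.
    return W * edge_mask(E, keep_ratio)
```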
al., 2015;) as x is the number of input samples and n is the number of output neurons) to determine neuron importance score with respect to the the final network output. Given the above, the edge score (Equation 5) can be rewritten as Note that the significance scores of edges in layer l considers not only the weights of the edges, but also the weights of all downstream edges between these edges and the final output layer. As noted in Section 3.1, the binary values in the masking matrix M l depends on τ l (θ l), which represents the lowest significance of the θ l % of the most significant edges in the layer 3: therefore, given a target sparsification rate, θ l, for layer l, we rank all the edges based on their edge significance scores and keep only the highest scoring θ l % of the edges by setting their mask values to 1. Note that, once an edge is sparsified, change in its contribution is not propagated back to the layers earlier in the network relative to the sparsified edge. Having determined the insignificant edges with respect to the final layer output, represented in form of the mask matrix, M l (described in Section 3.1), the next step is to integrate this mask matrix in the layer itself. To achieve this, iSparse extends the layer l (Equation 1) to account for the corresponding mask matrix (M l): where, * represents the element-wise multiplication between the matrices W l and M l. Intuitively, M l facilitates introduction of informed sparsity in the layer by eliminating edges that do not contribute significantly to the final output layer. In the previous section, we discussed the computation of edge significance scores on a pre-trained network, such as of pre-trained ImageNet models, and the use of these scores for network sparsification. In this section, we highlight that iSparse can also be integrated directly within the the training process. To achieve this, the edge significance score is computed for every trainable layer in the network using the strategy described in Section 3.2 and the mask matrix is updated using Equation 4. Furthermore, the back-propagation rule, described in Section 2, is updated to account for the mask matrices: where, W l are the updated weights, W l original weights, η is the learning rate, and Err l is the error recorded by as the divergence in between ground truth (Y l) and model predictions (Ŷ l) as Note that, we argue that any edge that does not contribute towards the final model output, must not be included in the back-propagation. Therefore, we mask the error as Err l * M l. In this section, we experimentally evaluate of the proposed iSparse framework using LeNet and VGG architectures (See Section 5.2) and compare it against the approaches, such as PFEC, NISP, and DropConnect (see Section 5.3). We implemented iSparse in Python environment (3.5.2) using Keras Deep Learning Library (2.2.4-tf) with TensorFlow Backend (1.14.0). All experiments were performed on an Intel Xeon E5-2670 2.3 GHz Quad-Core Processor with 32GB RAM equipped with Nvidia Tesla P100 GPU with 16 GiB GDDR5 RAM with CUDA-10.0 and cuDNN v7.6.4. 4. In this paper, without loss of generality, we leverage LeNet-5 and VGG-16 as the baseline architectures to evaluate sparsification performance on different benchmark image classification datasets and for varying degrees of edge sparsification. In this section, we present an overview of these architectures (See Table 1). 
LeNet-5: Designed for recognizing handwritten digits, LeNet-5 is simple network with 5 trainable (2 convolution and 3 dense) and 2 non-trainable layers using average pooling with tanh and softmax as the hidden and output activation. LeNet's simplicity has made it a common benchmark for datasets recorded in constrained environments, such as MNIST , FMNIST , COIL (a; b), and NORB . VGG-16: VGG 's, a 16 layer network with 13 convolution and 3 dense layers, with interleaved 5 max-pooling layers. VGG leverages ReLU as the hidden activation to overcome the problem of vanishing gradient, as opposed to tanh. Given the ability of VGG network to learn the complex pattern in the real-world dataset, we use the network on benchmark datasets, such as CIFAR10/20/100 , SVHN (Netzer & et. al., 2011), GTSRB (Stallkamp & et. al., 2012), and ImageNet (Deng et. al., 2009). Table 1 reports the number of trainable parameters (or weights) for each model/data set pair considered in the experiments. We compared iSparse against several state-of-the-art network sparsification techniques: DropConnect is a purely random approach, where edges are randomly selected for sparsification. Retraining-free considers each layer independently and sparsifies insignificant weights in the layer, without accounting for the final network output contribution. PFEC is a kernel pruning strategy that aims to eliminate neurons that have low impact on the overall model accuracy. In order to determine the impact, PFEC computes the l2-norms of the weights of the neurons and ranks them, separately, for each layer. NISP proposes a neuron importance score propagation (NISP) technique where neuron importance scores are propagated from the output layer to the input layer in a back-propagation fashion. Figure 7: Mask matrices for the LeNet network conv 2 layer for MNIST data (sparsification factor = 50%): dark regions indicate the edges that have been marked for sparsification; in (e) iSparse, the arrows point to those edges that are subject to different pruning decision from retrain-free in(d) (green arrows point to edges that are kept in iSparse instead of being pruned and red arrows point to edges that are sparsified in iSparse instead of being kept) In Figure 4, we first present top-1 and top-5 classification for ImageNet dataset for VGG-16 network. As we see in the Figure 4, iSparse provides the highest robustness to the degree of sparsification in the network. In particular, with iSparse, the network can be sparsified by 50% with ≤ 6% drop in accuracy for top-1 and ≤ 2% drop in accuracy for top-5 classification, respectively. In contrast, the competitors, see larger drops in accuracy. The closest competitor, Retrain-free, suffers a loss in accuracy of ∼ 16% and ∼ 6% for top-1 and top-5 classification, respectively. The other competitors suffer significant accuracy drops after a mere 10-20% sparsification. Figures 6a and 6b show the top-1 classification accuracy for other models and data sets. As we see here, the above pattern holds for all configurations considered: iSparse provides the best robustness. It is interesting to note that DropConnect, NISP, and PFEC see especially drastic drops in accuracy for the VGG-16 network and especially on the CIFAR data. This is likely because, VGG-CIFAR is already relatively sparse (20% > sparsity as opposed to ∼ 7% for VGG-ImageNet and < 1% for LeNet) and these three techniques are not able to introduce additional sparseness in a robust manner. 
In contrast, iSparse is able to introduce significant additional sparsification with minimal impact on accuracy. Figure 7 provides the mask matrices created by the different algorithms to visual illustrate the key differences between the competitors. As we see in this figure, PFEC and NISP, both sparsify input neurons. Consequently, their effect is to mask out entire columns from the weight matrix and this prevents these algorithms to provide fine grain adaption during sparsification. DropConnect selects individual edges for sparsification, but only randomly and this prevents the algorithm to provide sufficiently high robustness. Retrain-free and iSparse both select edges in an fine-grained manner: retrain-free uses relies on edge-weights, whereas iSparse complements edge-weight with an edge significance measure that accounts for each edges contribution to the final output within the overall network. As we see in Figure 7 (d) and (e), this in some differences in the corresponding mask matrices, and these differences are sufficient to provide significant boost in accuracy. Tables 2 present accuracy for the scenarios where iSparse (iS) is used to sparsify the model during the training process. The table also considers DropConnect (DC) and Retrain-Free (RF), as alternatives. As we see in the table, for both network architectures, under most sparsification rates, the output informed sparsification approach underlying iSparse leads to networks with the highest classification accuracies. In this section, we study the effect of the variations in network elements. In particular, we compare the performance of iSparse (iS) against DropConnect (DC) and Retraining-Free (RF) for different hidden activation functions and network optimizers. Table 3 presents classification performances for networks that rely on different activation functions (tanh and ReLU) and for optimizers (Adam and RMSProp). As we see in these two tables, iSparse remains the alternative which provides the best classification accuracy under different activation/optimization configurations. We next investigate the performance of iSparseunder different orders in which the network layers are sparsified. In particular, we considerthree sparsification orders: (a) input-to-output layer order: this is the most intuitive approach as it does not require edge significance scores to be revised based on sparsified edges in layers closer to the input; (b) output-to-input layer-order: in this case, edges in layers closer to the network output are sparsified first -but, this implies that edge significance scores are updated in the earlier layers in the network to account for changes in the overall edge contributions to the network; (c) random layer order: in this case, to order of the layers to be sparsified is selected randomly. Figure 8 presents the sparsification for different orders, data sets, and sparsification rates. As we see in the figure, the performance of iSparse is not sensitive to the sparsification order of the network layers. In Figure 5, we investigate the impact of edge sparsification on the classification time. As we see in this Figure, edge sparsification rate has a direct impact on the classification time of the ing model. When we consider that iSparse allows for ∼ 30 − 50% edge sparsification without any major impact on classification accuracies, this indicates that iSparse has the potential to provide significant performance gains. 
What is especially interesting to note in Figure 5 is that, while all three sparsification methods (iSparse, DropConnect, and Retraining-Free) sparsify the same number of edges for a given sparsification factor, the proposed iSparse approach leads to the lowest execution times among the three alternatives. We argue that this is because the output-informed sparsification provided by iSparse allows for more efficient computations in the sparsified space. In this paper, we proposed iSparse, a novel output-informed framework for edge sparsification in deep neural networks (DNNs). In particular, we propose a novel edge significance score that quantifies the significance of each edge in the network relative to its contribution to the final network output. iSparse leverages this edge significance score to minimize the redundancy in the network by sparsifying those edges that contribute least to the final network output. Experiments with 11 benchmark datasets and two well-known network architectures have shown that the proposed iSparse framework enables 30-50% network sparsification with minimal impact on model classification accuracy. Experiments have also shown that iSparse is highly robust to variations in network elements (activation and model optimization functions) and that iSparse provides a much better accuracy/classification-time trade-off than its competitors. | iSparse eliminates irrelevant or insignificant network edges with minimal impact on network performance by determining edge importance w.r.t. the final network output. | 1,274 | scitldr |
Language modeling tasks, in which words are predicted on the basis of a local context, have been very effective for learning word embeddings and context dependent representations of phrases. Motivated by the observation that efforts to code world knowledge into machine readable knowledge bases tend to be entity-centric, we investigate the use of a fill-in-the-blank task to learn context independent representations of entities from the contexts in which those entities were mentioned. We show that large scale training of neural models allows us to learn extremely high fidelity entity typing information, which we demonstrate with few-shot reconstruction of Wikipedia categories. Our learning approach is powerful enough to encode specialized topics such as Giro d’Italia cyclists. A long term goal of artificial intelligence has been the development and population of an entitycentric representation of human knowledge. Efforts have been made to create the knowledge representation with knowledge engineers BID10 or crowdsourcers BID1. However, these methods have relied heavily on human definitions of their ontologies, which are both limited in scope and brittle in nature. Conversely, due to recent advances in deep learning, we can now learn robust general purpose representations of words BID13 and contextualized phrases BID16 BID6 directly from large textual corpora. Consider the following context in which an entity mention is replaced with the [MASK] symbol:... [MASK], a Russian factory worker, was the first woman in space...As readers, we understand that first woman in space is a unique identifier, and we are able to fill in the blank unambiguously. The central hypothesis of this paper is that, by matching entities to the contexts in which they are mentioned, we should be able to build a representation for Valentina Tereshkova that encodes the fact that she was the first woman in space. To do this, we start with BERT BID6, a powerful pretrained text encoder, to encode contexts-Wikipedia text in which a hyperlinked span has been blanked out-and we train an entity encoder to match the BERT representation of the entity's contexts. We experiment with a lookup table that maps each entity to a fixed length vector, which we call RELIC (Representations of Entities Learned In Context). We hypothesize that the dedicated entity representations in RELIC should be able to capture knowledge that is not present in BERT. To test this, we compare RELIC to two BERT-based entity encoders: one that encodes the entity's canonical name, and one that encodes the first paragraph of the entity's Wikipedia page. Ultimately, we would like our representations to encode all of the salient information about each entity. However, for this initial work, we study our representations' ability to capture Wikipedia categorical information encoded by human experts. We show that given just a few exemplar entities of a Wikipedia category such as Giro d'Italia cyclists, we can use RELIC to recover the remaining entities of that category with good precision. Several works have tackled the problem of learning distributed representations of entities in a knowledge base (KB). Typical approaches rely on the (subject, relation, object) ontology of KBs like Free-base BID1. These methods embed the entities and relations in vector space, then maximize the score of observed triples against negative triples to do KB completion BID3 BID21 BID28 BID24.There is relatively less work in learning entity representations directly from text. 
The contextual approach of word2vec BID13 has been applied to entities, but there has been little analysis of how effective such a method would be for answering the entity typing queries we study in this work. Most methods for entity representations that do use raw text will combine it with structure from an existing KB BID18 BID23 BID9 in an effort to leverage as much information as possible. While there may be gains to be had from using structure, our goal in this work is to isolate and understand the limits of inducing entity representations from raw text alone. We also note the similarity of our RELIC task to entity linking BID17 BID4 BID22 BID7 and entity typing BID27 BID19, where entity mentions are processed in context. Unlike previous work in context-dependent entity typing BID11 BID5 BID15, we consider types of RELIC from a global perspective. We are interested in identifying contextindependent types of entities so that they can be used to identify structure in the entity latent space. We study the ability of current models to learn entity encoders directly from the contexts in which those entities are seen. Formally, we define an entity encoder to be a function h e = f (e) that maps each entity e to a vector h e ∈ R d. We outline the training procedure used to learn the encoders. RELIC training input Let E = {e 0 . . . e N} be a predefined set of entities, and let DISPLAYFORM0 is a sequence of words x i ∈ V. Each context contains exactly one instance of the [MASK] symbol. Our training data is a corpus of (context, entity) pairs DISPLAYFORM1. Each y i ∈ E is an entity, and the [MASK] symbol in x i substitutes for a single mention y i. For clean training data, we extract our corpus from English Wikipedia, taking advantage of its rich hyperlink structure (Section 3.2). We introduce a context encoder h x = g(x) that maps the context x into the same space R d as our entity encodings. Then we define a compatibility score between the entity e and the context x as the scaled cosine similarity s(x, e) = a · DISPLAYFORM0 ||g(x)||||f (e)|| where the scaling factor a is a learned parameter, following BID26. Now, given a context x, the conditional probability that e was the entity seen with x is defined as p(e|x) = exp(s(x,e)) e ∈E exp(s(x,e)) and we train RELIC by maximizing the average log probability 1 |D| (x,y)∈D log p(y|x). In practice, we use a noise contrastive loss BID8 BID14 ), where we sample K negative entities e 1, e 2,..., e K from a noise distribution p noise (e). Denoting e 0:= e, our per-example loss is l(s, x, e) = − log DISPLAYFORM1. We train our model with minibatch gradient descent and use all other entries in the batch as negatives. This is roughly equivalent to p noise (e) being proportional to entity frequency. BERT context encoder For g, we use the pretrained BERT model BID6, a powerful Transformer-based BID25 text encoder, to encode contexts into a fixed-size representation 1. We project the BERT hidden state into R d using a weight matrix W ∈ R d×768 to obtain our context encoding. Table 1: Results for the Wikipedia category population task. Mean Average Precision for ranking entities given a set of exemplars of a given Wikipedia class. K represents the number of candidates to be ranked, andp is the average number of positive labels among the candidates. Results are averaged over 100 categories sampled at random from those containing at least 1000 entities. Wikipedia name, and the first paragraph of its Wikipedia page. 
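To make the training objective concrete, here is a small numpy sketch of the scaled-cosine compatibility score and the in-batch softmax (noise-contrastive) loss described above, where every other entity in the batch serves as a negative. It is an illustration with our own variable names, not the paper's TensorFlow implementation.

```python
import numpy as np

def cosine_scores(ctx, ent, scale):
    # ctx: [B, d] context encodings g(x); ent: [B, d] entity encodings f(e).
    ctx = ctx / np.linalg.norm(ctx, axis=1, keepdims=True)
    ent = ent / np.linalg.norm(ent, axis=1, keepdims=True)
    return scale * ctx @ ent.T            # s(x_i, e_j) for every pair in the batch

def in_batch_softmax_loss(ctx, ent, scale):
    # The i-th context's true entity is the i-th entity in the batch; the
    # remaining B-1 entities act as negatives, which is roughly equivalent
    # to sampling negatives in proportion to entity frequency.
    s = cosine_scores(ctx, ent, scale)
    s = s - s.max(axis=1, keepdims=True)  # numerical stability
    log_p = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))
```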
We consider three encoders that operate on different representations of the entities: embedding lookup, BERT name encoder, and BERT description encoder. In the standard RELIC setup, we map each entity, identified by its unique ID, directly onto its own dedicated vector in R d via a |E| × d dimensional embedding matrix. We also consider two alternate BERT-based options for distributed encoding of entities, which are fine-tuned on the RELIC data. The name encoder applies a BERT Transformer to the canonical name of the entity to obtain a fixedsize representation. The description encoder applies a BERT Transformer to an entity's description to obtain a fixed size representation 3. Note that both name and description encoders can do zero-shot encoding of new entities, assuming that a name or description is provided. To train RELIC, we obtain data from the 2018-10-22 dump of English Wikipedia. We take E to be the set of all entities in Wikipedia (of which there are over 5 million). For each hyperlink, we take the context as the surrounding sentence, replace all tokens in the anchor text with a single [MASK] symbol, and set the entity linked to as ground truth. We limit each context to 64 tokens. We set the entity embedding size to d = 300. For the name and description encoders, we take the initial hidden state of the Transformer as the fixed-size representation. We limit to 16 name tokens and 128 description tokens. We train the model using TensorFlow BID0 ) with a batch size of 1024 for 5 epochs. We hold out about 1/30,000 of the data for use as validation, on which the final model achieves roughly 85% accuracy in-batch negative prediction for all models. We introduce a fine-grained entity typing task based on Wikipedia categories, where the task is to populate a category from a small number of exemplars. We evaluate if RELIC benefits from dedicated embeddings over the BERT encoders that share parameters between entities. We filter the Wikipedia categories in Yago 3.1 BID12 to get the 1,129 categories that cover at least 1,000 entities and consider an exemplar based "few-shot" scenario, based on the prototypical approach of BID20. For each category, we provide a small number of exemplars (3 or 10), one correct candidate entity drawn from the category, and K −1 other candidate entities. The candidate entities are ranked according to the inner product between their RELIC embeddings and the centroid of the exemplar embeddings, and we report the mean average precision (MAP) for entities belonging to the query class. Wikipedia categories are often incompletely labeled, and when K covers all entities, this confounds the MAP calculation. Therefore, we also present for K = 10 and K = 1000 for a cleaner experimental setup. TAB2 show . RELIC outperforms both the BERT name and description encoders when we restrict the candidate set to the entities seen at least 10 times in RELIC's training data, and the gap in performance increases as we increase the entity frequency threshold. However, both the name and description encoders outperform RELIC on very infrequent entities, since they can generalize from other entities with similar naming conventions or descriptions, while RELIC's embedding matrix treats every entity completely separately. FIG0 shows examples of predictions for Wikipedia categories given 3 exemplars for 5 randomly sampled categories. Most categories show high precision in the top 10 predictions. 
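The exemplar-based ranking behind these category predictions reduces to a centroid nearest-neighbour search in the embedding space. A minimal sketch, with illustrative function and variable names and embeddings assumed to be plain arrays:

    import numpy as np

    def rank_category_candidates(exemplar_embs, candidate_embs):
        # exemplar_embs:  [m, d] RELIC embeddings of the 3 or 10 exemplar entities
        # candidate_embs: [K, d] RELIC embeddings of the candidate entities to rank
        centroid = exemplar_embs.mean(axis=0)      # prototype of the category
        scores = candidate_embs @ centroid         # inner product with the centroid
        return np.argsort(-scores)                 # best-scoring candidates first

Mean average precision is then computed over this ranking against the (possibly incomplete) category labels.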
The category Butterflies of Africa fails; this is likely because its 3 exemplars appeared only 4 times in total in our pretraining data. The Giro d'Italia cyclists category is predicted very well: the single incorrect prediction, Thibaut Pinot, did cycle in the Giro d'Italia. However, for Video games featuring female protagonists, most of RELIC's success comes from simply retrieving variations of the Final Fantasy series. We demonstrated that the RELIC fill-in-the-blank task allows us to learn high-fidelity representations of entities with their own latent ontology, which we empirically verify through a few-shot Wikipedia category reconstruction task. We encourage researchers to explore the properties of our entity representations and BERT context encoder, which we will release publicly. | We learn entity representations that can reconstruct Wikipedia categories with just a few exemplars. | 1,275 | scitldr
Variational autoencoders (VAEs) have been successful at learning a low-dimensional manifold from high-dimensional data with complex dependencies. At their core, they consist of a powerful Bayesian probabilistic inference model, to capture the salient features of the data. In training, they exploit the power of variational inference, by optimizing a lower bound on the model evidence. The latent representation and the performance of VAEs are heavily influenced by the type of bound used as a cost function. Significant research work has been carried out into the development of tighter bounds than the original ELBO, to more accurately approximate the true log-likelihood. By leveraging the q-deformed logarithm in the traditional lower bounds, ELBO and IWAE, and the upper bound CUBO, we bring contributions to this direction of research. In this proof-of-concept study, we explore different ways of creating these q-deformed bounds that are tighter than the classical ones and we show improvements in the performance of such VAEs on the binarized MNIST dataset. Variational autoencoders (VAEs) BID10, BID4 ) are powerful Bayesian probabilistic models, which combine the advantages of neural networks with those of Bayesian inference. They consist of an encoder created with a neural network architecture, which maps the high-dimensional input data, x, to a low-dimensional latent representation, z, through the posterior probability distribution, p(z|x). Then, samples from this latent distribution are decoded back to a high-dimensional signal, through another neural network architecture and the probability distribution p(x|z). Integration performed with these probability distributions from the Bayesian framework of VAEs is intractable. As a solution, variational inference is employed to perform learning in these models, whereby a tractable bound on the model evidence is optimized instead of the intractable model evidence itself BID3. By design, the output model is set as p(x|z), usually a Bernoulli or a Gaussian probability distribution, depending on whether the target is discrete or continuous, and the prior distribution of the latent space as p(z). However, the true posterior distribution, p(z|x), remains unknown and is intractable. To solve this issue, an approximate posterior distribution, q(z|x), is learnt by means of a lower bound on the model evidence, termed the ELBO. For one data point, x (i), writing out the Kullback-Leibler divergence between the true and approximate posterior distributions and using its positivity property yields this bound: DISPLAYFORM0 The lower bound on the model evidence, the ELBO, now becomes the cost function used during the training phase of the VAEs. Over time, the first term shows how the reconstruction loss changes and the second term how far the approximate posterior is to the prior distribution. The of inference and the performance of VAEs on reconstructing and generating images heavily depend on the type of bound employed in training. A significant body of work has been carried out to replace the ELBO with tighter bounds on the model evidence. On the one hand, starting from an unbiased estimator of the true log-likelihood, the authors of BID0 derive an importance sampling estimate of the model evidence, the IWAE. This represents one of the tightest bounds of VAEs and has only recently been improved on in BID8, BID11. 
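For reference, the evidence lower bound for one data point, whose two terms match the reconstruction-loss and KL-to-prior reading given above, and the K-sample importance-weighted bound mentioned at the end of the paragraph, take the standard forms (restated from the cited works rather than taken from this text):

    \log p(x^{(i)}) \ge \mathbb{E}_{q(z \mid x^{(i)})}\!\left[\log p(x^{(i)} \mid z)\right]
        - \mathrm{KL}\!\left(q(z \mid x^{(i)}) \,\|\, p(z)\right) = \mathrm{ELBO}(x^{(i)})

    \log p(x^{(i)}) \ge \mathbb{E}_{z_{1:K} \sim q(z \mid x^{(i)})}\!\left[\log \frac{1}{K}
        \sum_{k=1}^{K} \frac{p(x^{(i)}, z_k)}{q(z_k \mid x^{(i)})}\right] = \mathcal{L}_K(x^{(i)})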
Increasing the number of importance samples in the IWAE objective, decreases the signal-to-noise-ratio of the gradients, which makes the learning more difficult, as the gradients suffer from a larger level of noise BID8. Several strategies are able to correct this issue. In the first algorithm, MIWAE, the outer expectation of the IWAE objective is approximated with more than one sample, as is the case in the IWAE. The second algorithm, CIWAE, represents a convex combination of the ELBO and the IWAE bounds and the third algorithm, PIWAE, separately trains the encoder and the decoder networks with different IWAE objectives. On the other hand, leveraging different divergences between the true and the approximate posterior distributions has lead to diverse bounds on the model evidence. Starting from the Rényi α-divergence BID9 between such distributions, a family of lower and upper bounds are obtained, parameterized by α BID6. However, these lower bounds become competitive with the IWAE, only in the limit α → −∞. In addition, the upper bounds suffer from approximation errors and bias and the means to select the best value of the hyperparameter α is unknown. Through an importance sampling scheme similar to the one found in the IWAE, these Rényi α bounds are tightened in BID15. If the Rényi α-divergence is replaced with the χ 2 divergence, the bound on the model evidence becomes the upper bound CUBO BID1. The Rényi α-family of bounds and others lose their interpretability as a reconstruction loss and a Kullback-Leibler divergence term that measures how close the approximate posterior is to the prior distribution. They remain just a cost function optimized during training. With different compositions of convex and concave functions, the approaches described above are unified in the K-sample generalized evidence lower bound, GLBO BID11. This study generalizes the concept of maximizing the logarithm of the model evidence to maximizing the φ-evidence score, where φ(u) is a concave function that replaces the logarithm. It allows for great flexibility in the choice of training objectives in VAEs. One particular setting provides a lower bound, the CLBO, which surpasses the IWAE objective. The aim of this work is to leverage the theory of q-deformed functions introduced in BID12, BID13, BID14, to derive tighter lower bounds on the model evidence in VAEs. To this end, our contributions are three-fold: firstly, we derive two novel lower bounds, by replacing the logarithm function in the classical ELBO, BID10, BID4, and IWAE bounds, BID0, BID7, respectively, with the q-deformed logarithm function. Values of q < 1.0 yield upper bounds of varying tightness on the classical logarithm function, as illustrated in FIG0.Secondly, we combine the information given by the upper bound CUBO, BID1, with the information given by the ELBO and the IWAE, respectively, to obtain a lower bound that is placed between the two. By the means of their construction, we hypothesize these q-deformed bounds to be closer to the true log-likelihood. We are able to confirm it in our experiments. We term our novel lower bounds the qELBO and the qIWAE.Thirdly, the tighteness of the gap between the classical logarithm function and the q-deformed one depends on the value of q, as seen in FIG0. Thus, q becomes a hyperparameter of our algorithm. Since q is a number, we can optimize it efficiently and accurately, using standard optimization algorithms. 
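The deformation underlying these contributions is the Tsallis q-logarithm, recalled more formally in the next section; its standard form and the ordering property used above are:

    \ln_q(x) = \frac{x^{1-q} - 1}{1 - q} \quad (q \ne 1), \qquad
    \lim_{q \to 1} \ln_q(x) = \ln x, \qquad
    \ln_q(x) \ge \ln x \ \text{ for } q < 1,\ x > 0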
By solving for the best q for each data batch, we make q a data-driven hyperparameter, tuned in an adaptive way during training. With the q-entropy, introduced in BID12, the author developed the field of nonextensive statistical mechanics, as a generalization of traditional statistical mechanics, centered around the Boltzmann-Gibbs distribution. The S q entropy provides a generalization of this distribution, which can more accurately explain the phenomena of anomalous physical systems, characterized by rare events. In the following definitions, the original quantities can be recovered in the limit q → 1. If k > 0 is a constant, W ∈ N is the total number of possible states of a system and p i the corresponding probabilities, ∀i = 1: W, then: DISPLAYFORM0 The generalized logarithmic function, termed the q-logarithm, is introduced in BID13 as: DISPLAYFORM1 The Kullback-Leibler divergence is generalized in BID14 to the form DISPLAYFORM2 In order to derive our q-deformed bounds, we replace the logarithm function from the ELBO and IWAE bounds, with its q-deformed version. By appropriately optimizing the hyperparameter q, we will obtain an upper bound on the ELBO and IWAE, respectively: DISPLAYFORM3 DISPLAYFORM4 Optimization algorithm for q. We train a variational autoencoder with our novel qELBO and qIWAE bounds. The training procedure and the optimization method for q are identical for both types of q-deformed bounds. We will describe them in the case of the qELBO.We start the training procedure with an initial value of q = 1.0 − 10 −6. For one batch of images, we compute the qELBO lower bound and the CUBO upper bound BID1, averaged over the batch. In order to obtain a tighter lower bound, qELBO *, we set a desired value of the cost function at qELBO * = qELBO +τ · (CUBO − qELBO), where, in our experiments, τ ∈ {0.5, 0.75}.By means of the L-BFGS-B optimization method, we find the optimal value q *, such that DISPLAYFORM5 For this task, we employ the scipy optimization package in python. We apply the gradient descent step on our new, improved, cost function, qELBO *, computed with this optimal value, q *. We save this value of q for the next batch of images and we repeat the optimization steps described above, for all training batches. For the experiments conducted on the MNIST dataset BID5, we use the one-stochastic layer architecture employed in BID0 and in BID6. The encoder and the decoder are composed of two deterministic layers, each with 200 nodes, and of a stochastic layer with 50 nodes. The dimension of the latent space is equal to 50 and the activation functions are the softplus function. The approximate posterior is modeled as a Gaussian distribution, with a diagonal covariance matrix. The output model is a Bernoulli distribution for each pixel. We use the binarized MNIST dataset provided by tensorflow, with 55000 training images and 10000 test images. The learning rate is fixed at 0.005 and there is no updating schedule. To implement and test our new algorithms, we modify publicly available code 1 BID6. On the benchmark binary MNIST dataset BID5, we compare our newly derived q-deformed bounds with the ELBO and the IWAE and we show several improvements that we obtained. On the test set, we report the bounds computed with K number of samples and the true log-likelihood estimated with 5000 importance samples, logp x. The expectations involved in all of the bounds are estimated with Monte Carlo sampling. 
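The per-batch search for q described earlier in this section can be sketched as follows. This is a minimal illustration under stated assumptions: the exact objective handed to L-BFGS-B is not spelled out above, so the sketch minimizes the squared gap to the target qELBO*, and the bound q < 1 and the helper names are assumptions rather than the authors' implementation:

    import numpy as np
    from scipy.optimize import minimize

    def q_log(x, q):
        # Tsallis q-deformed logarithm; recovers log(x) as q -> 1
        if abs(q - 1.0) < 1e-12:
            return np.log(x)
        return (x ** (1.0 - q) - 1.0) / (1.0 - q)

    def fit_q(qelbo_of_q, cubo, tau, q_init=1.0 - 1e-6):
        # qelbo_of_q: callable returning the batch-averaged qELBO for a given q
        # cubo:       batch-averaged CUBO upper bound on the same batch
        # tau:        interpolation weight (0.5 or 0.75 in the experiments)
        base = qelbo_of_q(q_init)
        target = base + tau * (cubo - base)
        gap = lambda q: (qelbo_of_q(float(q[0])) - target) ** 2
        res = minimize(gap, x0=[q_init], method="L-BFGS-B", bounds=[(0.0, 1.0 - 1e-6)])
        return float(res.x[0])

The returned q* is then used to evaluate the batch cost for the gradient step and carried over as the initial value for the next batch, as described above.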
For the ELBO and the qELBO bounds, the expectation is approximated with K number of samples. The expectation in the standard IWAE is approximated with one sample. Thus, we will compute the expectation in the qIWAE with one sample, as well. Here, K refers to the number of importance samples used in the computation of the bound. In addition, we illustrate the performance of our algorithms on reconstructed binary MNIST test images and on randomly generated ones. After 3000 epochs of training, the qIWAE(τ =0.5) Figure 3: Method: qVAE(τ = 0.5) with K=50 samples. From left to right: original binary MNIST test images, reconstructed and randomly generated ones.algorithm, with the bound estimated with K=50 samples, gives the best on the importance sampling estimate of the true log-likelihood, very close to the one given by the standard IWAE. Moreover, the q-deformed bound is much closer to the estimated true value, than is the IWAE bound. We observe this behaviour for all the q-deformed bounds. This implies that, during training, optimizing the q-deformed bounds provides a cost function that is a more accurate approximation of the model evidence. Although the q-deformed ELBO does not outperform the standard IWAE, we can see significant improvements over the traditional ELBO, in all the test cases. A large decrease in the value of the bound is present for all the qELBO variants, more pronounced in the large sample regime. We addressed the challenging task of deriving tighter bounds on the model evidence of VAEs. Significant research effort has gone in this direction, with several major contributions having been developed so far, which we reviewed in the introduction. We leveraged the q-deformed logarithm function, to explore other ways of tightening the lower bounds. As well as improvements in the estimated true log-likelihood, we found that the q-deformed bounds are much closer to the estimated true log-likelihood, than the classical bounds are. Thus, training with our novel bounds as the cost function may increase the learning ability of VAEs. Through the preliminary experiments we have conducted so far, we have achieved our goal. They show that our approach has merit and that this direction of research is worth pursuing in more depth, to produce more accurate bounds and to study their impact on the performance of VAEs. As future work, similarly to BID8, we plan to investigate how the tightening the ELBO and the IWAE influences the learning process and affects the gradients and the structure of the latent space, compared with the classical case. In addition, we plan to explore different optimization strategies for q and to study its role in achieving tighter bounds. We will also apply our q-deformed bounds, to investigate the disentanglement problem in VAEs, see for example BID2. The research question addressed here is how different bounds change the structure of the latent space, to provide better or worse disentanglement scores. Finally, we would also like to test our novel bounds on all the major benchmark datasets used for assessing the performance of VAEs and compare them with other state-of-the-art bounds on the model evidence. | Using the q-deformed logarithm, we derive tighter bounds than IWAE, to train variational autoencoders. | 1,276 | scitldr |
A belief persists long in machine learning that enlargement of margins over training data accounts for the resistance of models to overfitting by increasing the robustness. Yet Breiman shows a dilemma that a uniform improvement on margin distribution \emph{does not} necessarily reduces generalization error. In this paper, we revisit Breiman's dilemma in deep neural networks with recently proposed normalized margins using Lipschitz constant bound by spectral norm products. With both simplified theory and extensive experiments, Breiman's dilemma is shown to rely on dynamics of normalized margin distributions, that reflects the trade-off between model expression power and data complexity. When the complexity of data is comparable to the model expression power in the sense that training and test data share similar phase transitions in normalized margin dynamics, two efficient ways are derived via classic margin-based generalization bounds to successfully predict the trend of generalization error. On the other hand, over-expressed models that exhibit uniform improvements on training normalized margins may lose such a prediction power and fail to prevent the overfitting. Margin, as a measurement of the robustness allowing some perturbations on classifier without changing its decision on training data, has a long history in characterizing the performance of classification algorithms in machine learning. As early as BID17, it played a central role in the proof on finite-stopping or convergence of perceptron algorithm when training data is separable. Equipped with convex optimization technique, a plethora of large margin classifiers are triggered by support vector machines BID3 BID23. AdaBoost, an iterative algorithm to combine an ensemble of classifiers proposed by BID4, often exhibits a resistance to overfitting phenomenon that during the training process the generalization error keeps on non-increasing when the training error drops to zero. Toward deciphering the such a resistance of overfitting phenomenon, BID19 proposed an explanation that the training process keeps on improving a notion of classification margins in boosting, among later works on consistency of boosting with early stopping regularization BID2 BID30 BID28. Lately such a resistance to overfitting is again observed in deep neural networks with overparameterized models. A renaissance of margin theory is proposed by BID0 with a normalization of network using Lipschitz constants bounded by products of operator spectral norms. It inspires many further investigations in various settings BID14 BID16 BID12.However, the improvement of margin distributions does not necessarily guarantee a better generalization performance, which is at least traced back to BID1 in his effort to understanding AdaBoost. In this work, Breiman designed an algorithm arc-gv such that the margin can be maximized via a prediction game, then he demonstrated an example that one can achieve uniformly larger margin distributions on training data than AdaBoost but suffer a higher generalization error. In the end of this paper, Breiman made the following comments with a dilemma: "The above leave us in a quandary. The laboratory for various arcing algorithms are excellent, but the theory is in disarray. The evidence is that if we try too hard to make the margins larger, then overfitting sets in.... 
My sense of it is that we just do not understand enough about what is going on."Breiman's dilemma triggers some further explorations to understand the limitation of margin theory in boosting BID18; BID27. In particular, BID18 points out that the trees found by arg-gv have larger model complexity in terms of deeper average depth than AdaBoost, suggesting that margin maximization in arc-gv does not necessarily control the model complexity. The latter works provide tighter bounds based on VC-dimension and optimized quantile training margins, which however do not apply to over-parametrized models in deep neural networks and the case where the training margin distributions are uniformly improved. In this paper, we are going to revisit Breiman's dilemma in the scenario of deep neural networks. Both the success and failure can be seen on normalized margin based bounds on generalization error. First of all, let's look at the following illustration example. Example (Breiman's Dilemma with a CNN). A basic 5-layer convolutional neural network of c channels (see Section 3 for details) is trained with CIFAR-10 dataset whose 10 percent labels are randomly permuted. When c = 50 with 92, 610 parameters, FIG0 shows the training error and generalization (test) error in solid curves. From the generalization error in (a) one can see that overfitting indeed happens after about 10 epochs, despite that training error continuously drops down to zero. One can successfully predict such an overfitting phenomenon from FIG0 (b), the evolution of normalized margin distributions defined later in this paper. In (b), while small margins are monotonically improved during training, large margins undergoes a phase transition from increase to decrease around 10 epochs such that one can predict the tendency of generalization error in (a) using large margin dynamics. Two particular sections of large margin dynamics are highlighted in (b), one at 8.3 on x-axis that measures the percentage of normalized training margins no more than 8.3 (training margin error) and the other at 0.8 on y-axis that measures the normalized margins at quantile q = 0.8 (i.e. 1/γ q,t). Both of them meet the tendency of generalization error in (a) and find good early stopping time to avoid overfitting. However, as we increase the channel number to c = 400 with about 5.8M parameters and retrain the model, (c) shows a similar overfitting phenomenon in generalization error; on the other hand, (d) exhibits a monotonic improvement of normalized margin distributions without a phase transition during the training and thus fails to capture the overfitting. This demonstrates the Breiman's dilemma in CNN. A key insight behind this dilemma, is that one needs a trade-off between the model expression power and the complexity of the dataset to endorse margin bounds a prediction power. On one hand, when the model has a limited expression power relative to the training dataset, in the sense that the training margin distributions CAN NOT be uniformly improved during training, the generalization or test error may be predicted from dynamics of normalized margin distributions. On the other hand, if we push too hard to improve the margin by giving model too much degree of freedom such that the training margins are uniformly improved during training process, the predictability may be lost. A trade-off is thus necessary to balance the complexity of model and dataset, otherwise one is doomed to meet Breiman's dilemma when the models arbitrarily increase the expression power. 
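One way to make this observation concrete is to track a low and a high quantile of the normalized training margins over epochs: a cross-over, where small margins keep rising while large margins peak and then fall, signals the predictive regime, whereas a uniform rise of both suggests over-expression and a possible loss of predictability. The following heuristic is an illustration of that reading, not a procedure taken from the paper:

    import numpy as np

    def margin_dynamics_summary(margins_per_epoch, low_q=0.1, high_q=0.8):
        # margins_per_epoch: list of 1-D arrays of normalized training margins, one per epoch
        low = np.array([np.quantile(m, low_q) for m in margins_per_epoch])
        high = np.array([np.quantile(m, high_q) for m in margins_per_epoch])
        # 1 / high corresponds to the inverse quantile margin 1 / gamma_{q,t} monitored in the figures
        peak = int(np.argmax(high))                      # epoch where large margins start to drop
        uniform_improvement = peak == len(high) - 1      # high quantile never turns down
        return {"low_quantile": low, "high_quantile": high,
                "suggested_early_stop": peak, "uniform_improvement": bool(uniform_improvement)}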
The example above shows that the expression power of models relative to the complexity of dataset, can be observed from the dynamics of normalized margins in training, instead of counting the number of parameters in neural networks. In the sequel, our main contributions are to make these precise by revisiting the Rademacher complexity bounds with Lipschitz constants BID0.• With the Lipschitz-normalized margins, a linear inequality is established between training margin and test margin in Theorem 1. When both training and test normalized margin distributions undergo similar phase transitions on increase-decrease during the training process, one may predict the generalization error based on the training margins as illustrated in FIG0.• In a dual direction, one can define a quantile margin via the inverse of margin distribution functions, to establish another linear inequality between the inverse quantile margins and the test margins as shown in Theorem 2. Quantile margin is far easier to tune in practice and enjoys a stronger prediction power exploiting an adaptive selection of margins along model training.• In all cases, Breiman's dilemma may fail both of the methods above when dynamics of normalized training margins undergo different phase transitions to that of test margins during training, where a uniform improvement of margins in overfitting. Section 2 describes our method to derive the two linear inequalities of generalization bounds above. Extensive experimental are shown in Section 3 and Appendix with basic CNNs, AlexNet, VGG, ResNet, and various datasets including CIFAR10, CIFAR100, and mini-Imagenet. Let X be the input space (e.g. X ⊂ R C×W ×H in image classification) and Y:= {1, . . ., K} be the space of K classes. Consider a sample set of n observations S = {(x 1, y 1),..., (x n, y n): x i ∈ X, y i ∈ Y} that are drawn i.i.d. from P X,Y. For any function f: X → R, let Pf = X f (X)dP be the population expectation and P n f = (1/n) n i=1 f (x i) be the sample average. Define F to be the space of functions represented by neural networks, DISPLAYFORM0 where l is the depth of the network, W i is the weight matrix corresponding to a linear operator on x i and σ i stands for either element-wise activation function (e.g. ReLU) or pooling operator that are assumed to be Lipschitz bounded with constant L σi and satisfying σ i = 0. For example, in convolutional network, W i x i + b i = w i * x i + b i where * stands for the convolution between input tensor x l and kernel tensor w l. We equip F with the Lipschitz semi-norm, for each f, DISPLAYFORM1 where · σ is the spectral norm and DISPLAYFORM2 For all the examples in this paper, we use ReLU activation σ i that leads to L σi = 1. Moreover we consider the following family of hypothesis mapping, DISPLAYFORM3 where [·] j denotes the j th coordinate and we further define the following class induced by Lipschitz semi-norm bound on F, DISPLAYFORM4 Lastly, rather than merely looking at whether a prediction f (x) on y is correct or not, we also consider the margin defined as ζ(f (x), y) = [f (x)] y −max {j:j =y} [f (x)] j. Therefore, we can define the ramp loss and margin error depending on the confidence of predictions. Given two thresholds γ 2 > γ 1 ≥ 0, define a ramp loss to be DISPLAYFORM5 where ∆:= γ 2 − γ 1. In particular γ 1 = 0 and γ 2 = γ, we also write γ = γ for simplicity. 
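A sketch of the central quantities, the multi-class margin and its normalization by the product of per-layer spectral norms, together with the empirical margin error that is formalized next, is given below; array layout and names are illustrative:

    import numpy as np

    def normalized_margins(logits, labels, spectral_norms):
        # logits: [n, K] outputs f(x_i); labels: [n]; spectral_norms: per-layer estimates of ||W_l||_sigma
        lip = float(np.prod(spectral_norms))             # L_f, the normalizing factor
        idx = np.arange(len(labels))
        true_score = logits[idx, labels]
        rest = logits.copy()
        rest[idx, labels] = -np.inf
        runner_up = rest.max(axis=1)
        return (true_score - runner_up) / lip            # zeta of the normalized network

    def margin_error(margins, gamma):
        # fraction of training points whose normalized margin is at most gamma
        return float((margins <= gamma).mean())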
Define the margin error to measure if f has margin no more than a threshold γ, DISPLAYFORM6 In particular, e 0 (f (x), y) is the common mis-classification error and DISPLAYFORM7 Note that e 0 ≤ γ ≤ e γ, and γ is Lipschitz bounded by 1/γ. The central question we try to answer is, can we find a proper upper bound to predict the tendency of the generalization error along training, such that one can early stop the training near the epoch that DISPLAYFORM8 The answer is both a yes and a no!We begin with the following lemma, as a typical in multi-label classification from the uniform law of large numbers BID8. Lemma 2.1. Given a γ 0 > 0, then, for any δ ∈, with probability at least 1 − δ, the following holds for any f ∈ F with f F ≤ L, DISPLAYFORM9 is the Rademacher complexity of function class H L with respect to n samples, and the expectation is taken over x i, ε i, i = 1,..., n. Unfortunately, direct application of such bound for a constant γ 0 will suffer from the so-called scaling problem. The following proposition gives an lower bound of Rademacher complexity term, whose proof is provided in Appendix D. Proposition 1. Consider the networks with ReLU activation functions. For any L > 0, there holds, DISPLAYFORM10 where C > 0 is a constant that does not depend on S.The lemma tells us if L → ∞, upper bound becomes trivial since R n (H L) → ∞. In fact, both BID22 and BID21 show that with gradient descent, the norm of estimator's weight in logistic regression and general boosting (including exponential loss), respectively, will go to infinity at a growth rate log(t) when the data is linearly separable. As for the deep neural network with cross-entropy loss, the input of last layer is usually be viewed as features extracted from original input. Training the last layer with other layers fixed is exactly a logistic regression, and the feature is linearly separable as long as the training error achieves zero. Therefore, without any normalization, the hypothesis space along training has no upper bound on L and the upper bound is useless. Besides, even for a fixed L, the complexity term R n (H L) is computationally intractable. The first remedy is to restrict our attention on H 1 by normalizing f with its Lipschitz semi-norm f F or its upper bounds. Note that a normalized networkf = f /C has the same mis-classification error as f for all C > 0. For the choice of C, it's hard in practice to directly compute the Lipschitz semi-norm of a network, but instead some approximate estimates on the upper bound L f in are available as discussed in Appendix A. In the sequel, letf = f /L f be the normalized network and DISPLAYFORM11 be the corresponding normalized hypothesis function. Now a simple idea is to regard R n (H 1) as a constant and predict the tendency of generalization error via training margin error of the normalized network, that avoids the scaling problem and the computation of complexity term. The following theorem makes this precise. Theorem 1. Given γ 1 and γ 2 such that γ 2 > γ 1 ≥ 0 and ∆:= γ 2 − γ 1 ≥ 0, for any δ > 0, with probability at least 1 − δ, along the training epoch t = 1,..., T, the following holds for each f t, DISPLAYFORM12 where DISPLAYFORM13 Remark. 
In particular, when we take γ 1 = 0 and γ 2 = γ > 0, the bound above becomes, DISPLAYFORM14 Theorem 1 says, we can bound the normalized test margin distribution DISPLAYFORM15 Recently BID12 investigates for normalized networks, the strong linear relationship between cross-entropy training loss and test loss when the training epochs are large enough. As a contrast, we consider the whole training process and normalized margins. In particular, we hope to predict the trend of generalization error by choosing γ 1 = 0 and a proper γ. For this purpose, the following facts are important. First, we do not expect the bound, for example, is tight for every choice of γ > 0, instead we hope there exists some γ such that the training margin error nearly monotonically changes with generalization error. FIG1 shows the existence of such γ such that the training margin error successfully recover the tendency of generalization error on CIFAR10 dataset. Moreover, in Appendix Figure 8 shows the rank correlation between training margin error at various γ and training/test error. Second, the normalizing factor is not necessarily to be an upper bound of Lipschitz semi-norm. The key point is to prevent the complexity term of the normalized network going to infinity. Since for any constant c > 0, normalization byL = cL works in practice where the constant could be absorbed to γ, we could ignore the Lipschitz constant introduced by general activation functions in the middle layers. However, it is a natural question whether a reasonable γ with prediction power exists. A simple example in FIG0 shows, once the training margin distribution is uniformly improved, dynamic of training margin error fails to detect the minimum of generalization error in the early stage. This is because when network structure becomes complex enough, the training margin distribution could be more easily improved but the the generalization error may overfit. This is exactly the same observation in BID1 to doubt the margin theory in boosting type algorithms. More detailed discussions will be given in Section 3.2.The most serious limitation of Theorem 1 lies in we must fix a γ along the complete training process. In fact, the first term and second term in the bound vary in the opposite directions with respect to γ, and thus different f t may prefer different γ for a trade-off. As in FIG0 (b) of the example, while choosing γ is to fix an x-coordinate section of margin distributions, its dual is to look for a y-section which leads to different margins for different f t. This motivates the quantile margin in the following theorem. Letγ q,f be the q th quantile margin of the network f with respect to sample S, DISPLAYFORM16 Theorem 2. Assume the input space is bounded by M > 0, that is x 2 ≤ M, ∀x ∈ X. Given a quantile q ∈, for any δ ∈ and τ > 0, the following holds with probability at least 1 − δ for all f t satisfyingγ q,ft > τ, DISPLAYFORM17 DISPLAYFORM18 Remark. We simply denote γ q,t for γ q,ft when there is no confusion. Compared with the bound, make the choice of γ varying with f t and the cost is an additional constant term C 2 q and the constraintγ q,t > τ that typically holds for large enough q in practice. In applications, stochastic gradient descent (SGD) often effectively improves the training margin distributions along the drops of training errors, a small enough τ and large enough q usually meetγ q,t > τ. 
Moreover, even with the choice τ = exp(−B), constant term [log log 2 (4(M + l)/τ )]/n = O(log B/n) is still negligible and thus very little cost is paid in the upper bound. In practice, tuning q ∈ is far easier than tuning γ > 0 directly and setting a large enough q ≥ 0.9 usually provides us lots of information about the generalization performance. The quantile margin works effectively when the dynamics of large margin distributions reflects the behavior of generalization error, e.g. FIG0. In this case, after certain epochs of training, the large margins have to be sacrificed to further improve small margins to reduce the training loss, that typically indicates a possible saturation or overfitting in test error. We briefly introduce the network and dataset used in the experiments. For the network, we first consider the convolutional neural network with very simple structure basic CNN(c). The structure is shown in Appendix Figure 7. Basically, it has five convolutional layers with c channels at each and one fully connected layer, where c will be specified in concrete examples. Second, we consider more practical network structure, AlexNet BID10, VGGNet-16 BID20 and ResNet-18 BID6. For the dataset, we consider CIFAR10, CIFAR100 BID9 ) and Mini-ImageNet.The spirit of the following experiments is to show, when and how, the margin bound could be used to predict the tendency of generalization or test error along the training path? This section is to apply Theorem 1 and Theorem 2 to predict the tendency of generalization error. Let's firstly consider training a basic CNN on CIFAR10 dataset with and without random noise. The relations between generalization error and training margin error e γ (f (x), y) with γ = 9.8, inverse quantile margin 1/γ q,t with q = 0.6 are shown in FIG1. In this simple example where the net is light and the dataset is simple, the linear bounds and show a good prediction power: they stop either near the epoch of sufficient training (Left, original data) or where even an overfitting occurs (Right, 10 percents label corrupted). and CIFAR10 with 10 percents label corrupted (Right). In each figure, we show training error (red solid), training margin error γ = 10 (red dash) and inverse quantile margin (red dotted) with q = 0.6 and generalization error (blue solid). The marker "x" in each curve indicates the global minimum along epoch 1,..., T. Both training margin error and inverse quantile margin successfully predict the tendency of generalization error. A few discussions are given below.1. There exists a trade-off on the choice of γ from the linear bounds (and parallel arguments hold for q). The training margin error with a small γ is close to the training error, while a large γ is close to generalization error and it's illustrated in Appendix Figure 8 where we show the Spearman's ρ rank correlation 1 between training margin error and training error, generalization error against threshold γ. 2. The training margin error (or inverse quantile margin) is closely related to the dynamics of training margin distributions. For certain choice of γ, if the curve of training margin error (with respect to epoch) is V-shape, the corresponding dynamics of training margin distributions will have a cross-over, where the low margins have a monotonic increase and the large margins undergo a phase transition from increase to decrease, as illustrated by the red arrow in FIG0. 3. Dynamics of quantile margins can adaptively select γ t for each f t without access to the complexity term. 
Unlike merely looking at the training margin error with a fixed γ, quantile margin bound shows a stronger prediction power than and even be able to capture more local information as illustrated in FIG2. The generalization error curve has two valleys corresponding to a local optimum and a global optimum, and the quantile margin curve with q = 0.95 successfully identifies both. However, if we consider the dynamics of training margin errors, it's rarely possible to recover the two valleys at the same time since their critical thresholds γ t1 and γ t2 are different. Another example of ResNet is given in Appendix Figure 9. In this section, we explore the normalized margin dynamics with over-parameterized models whose expression power might be greater than data complexity. We conduct experiments in the following two scenarios.1. In the first experiment shown in FIG3, we fix the dataset to be CIFAR10 with 10 percent of labels randomly permuted, and gradually increase the channels from basic CNN to basic CNN. As the channel number increases, dynamics of the normalized training margins in the first row change from a phase transition with a cross-over in large margins to a monotone improvement of margin distributions. This phenomenon is not a surprise since with a strong representation power, the whole training margin distribution can be monotonically improved without sacrificing the large margins. On the other hand, the generalization or test error can never be monotonically improved. In the second row, heatmaps depict rank correlations of dynamics between training and test margin errors, which clearly show the phase transitions for CNN and CNN and its disappearance for CNN. 2. In the second experiment shown in 5, we compare the normalized margin dynamics of training CNN and ResNet18 on two different datasets, CIFAR100 (the simpler) and Mini-ImageNet (the more complex). It shows that: (a) CNN (5.8M parameters) does not have an over-representation power on CIFAR100, whose normalized training margin dynamics exhibits a phase transition -a sacrifice of large margins to improve small margins during training; (b) ResNet18 (11M parameters) exhibits an over-representation power on CIFAR100 via a monotone improvement on training margins, but loses such a power in Mini-ImageNet with the phase transitions in margin dynamics. More experiments including AlexNet and VGG16 are shown in Appendix FIG0.This phenomenon is not unfamiliar to us, since Breiman BID1 has pointed out that the improvement of training margins is not enough to guarantee a small generalization or test error in the boosting type algorithms. In this paper Breiman designed an algorithm, called arc-gv, enjoying an uniformly better training margin distribution comparing with Adaboost but suffer a higher generalization error. Now again we find the same phenomenon ubiquitous in deep neural networks. Dataset: CIFAR100 (Left, Middle), Mini-ImageNet (Right) with 10 percent labels corrupted. With a fixed network structure, we further explore how the complexity of dataset influences the margin dynamics. Taking ResNet18 as an example, margin dynamics on CIFAR100 doesn't have any crossover (phase transition), but on Mini-Imagenet a cross-over occurs. In the end, it's worth mentioning different choices of the normalization factor estimates may affect the range of predictability. In all experiments above, normalization factor is estimated via an upper bound on spectral norm given in Appendix A (Lemma A.1 in Section A). 
One could also use power iteration BID14 to present a more precise estimation on spectral norm. It turns out a more accurate estimation of spectral norm can extend the range of predictability, but Breiman's dilemma is still there when the balance between model expression power and dataset complexity is broken. More experiments on this aspect can be found in FIG0 in Appendix. In this paper, we show that Breiman's dilemma is ubiquitous in deep learning, in addition to previous studies on Boosting algorithms. We exhibit that Breiman's dilemma is closely related to the tradeoff between model expression power and data complexity. A novel perspective on phase transitions in dynamics of Lipschitz-normalized margin distributions is proposed to inspect when the model has over-representation power compared to the dataset, instead of merely counting the number of parameters. A data-driven early stopping rule by monitoring the margin dynamics is a future direction to explore. Lipschitz semi-norm plays an important role in normalizing or regularizing neural networks, e.g. in GANs BID7 BID14, therefore a more careful treatment deserves further pursuits. In this section we discuss how to estimate the Lipschitz constant bound in. Given an operator W associated with a convolutional kernel w, i.e. W x = w * x, there are two ways to estimate its operator norm. We begin with a useful lemma, Lemma A.1. For convolution operator with kernel w, i.e. W x:= w * x, there holds w * x 2 ≤ w 1 x 2.In other words, W σ ≤ w 1.Proof. DISPLAYFORM0 where the second last step is due to Cauchy-Schwartz inequality. A. 1 -norm. The convolutional operator (spectral) norm can be upper bounded by the 1 -norm of its kernels, i.e. W σ ≤ w 1. This is a simple way but the bound gets loose when the channel numbers increase. B. Power iteration. A fast approximation for the spectral norm of the operator matrix is given in BID14 in GANs that is based on power iterations BID5 ). Yet as a shortcoming, it is not easy to apply to the ResNets. We compare two estimation in Appendix FIG0. It turns out both of them have prediction power on the tendency of generalization error and both of them will fail when the network has large enough expression power. Though using 1 norm of kernel is extremely efficient, the power iteration method may be tighter and has a wider range of predictability. In the remaining of this section, we will particularly discuss the treatment of ResNets. ResNet is usually a composition of the basic blocks shown in FIG5 with short-cut structure. The following method is used in this paper to estimate the upper bound of operator or spectral norm of such a basic block of ResNet. B are mean and variance of batch samples, while keeping an online averaging asμ andσ 2. Then BN rescales x + by estimated parametersα,β and outputx =αx + +β. Therefore the whole rescaling of BN on the kernel tensor w of the convolution layer isŵ = wα/ √σ 2 + and its corresponding rescaled operator is DISPLAYFORM1 (b) Activation and pooling: their Lipschitz constants L σ can be known a priori, e.g. L σ = 1 for ReLU and hence can be ignored. In general, L σ can not be ignored if they are in the shortcut as discussed below.(d) Shortcut: In residue net with basic block in FIG5, one has to treat the mainstream (Block 2, Block 3) and the shortcut Block 1 separately. Since f + g F ≤ f F + g F, in this paper we take the Lipschitz upper bound by DISPLAYFORM2 where Ŵ i σ denotes a spectral norm estimate of BN-rescaled convolutional operator W i. 
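The two per-layer estimates (A) and (B) can be sketched as follows; the block-level treatment of residual blocks described next then combines such per-layer estimates. The 1-norm bound follows Lemma A.1 taken entrywise over the kernel; for power iteration the 4-D kernel is flattened to a 2-D matrix, which is the common practice in the cited spectral-normalization work and only an approximation of the true convolution operator norm:

    import numpy as np

    def l1_norm_bound(kernel):
        # Lemma A.1: ||W||_sigma <= ||w||_1, with the 1-norm taken over all kernel entries
        return float(np.abs(kernel).sum())

    def power_iteration_sigma(kernel, n_iters=20):
        # flatten [c_out, c_in, k, k] -> [c_out, c_in * k * k] and estimate its largest singular value
        w = kernel.reshape(kernel.shape[0], -1)
        v = np.random.randn(w.shape[1])
        for _ in range(n_iters):
            u = w @ v
            u /= np.linalg.norm(u) + 1e-12
            v = w.T @ u
            v /= np.linalg.norm(v) + 1e-12
        return float(u @ w @ v)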
In particular L σout can be ignored since all paths are normalized by the same constant while L σin can not be ignored due to its asymmetry. B STRUCTURE OF BASIC CNN The picture is slight different here, since after the first (better) local minimum, the training margin distribution is uniformly improved without reducing generalization error. Therefore, we could not expect the inverse quantile margin to reflect the tendency of generalization error globally, especially the order of two local minimums. However, around epochs when local minimum occurs, the training margin distribution still has a cross-over, and thus the inverse quantile margin could reflect the tendency locally. Lemma D.1. For any δ ∈ and bounded-value functions F B:= {f : X → R : f ∞ ≤ B}, the following holds with probability at least 1 − δ, DISPLAYFORM3 where DISPLAYFORM4 is the Rademacher Complexity of function class F.For completeness, we include its proof that also needs the following well-known McDiarmid's inequality (see, e.g.). DISPLAYFORM5 where with probability at least 1 − δ, DISPLAYFORM6 by McDiarmid's bounded difference inequality, and DISPLAYFORM7 using Rademacher complexity. To see FORMULA0, we are going to show that sup f ∈F B E nf is a bounded difference function. Consider DISPLAYFORM8 Assume that the i-th argument x i changes to x i, then for every g, g(DISPLAYFORM9 Hence sup g g(x i, x −i) − sup g g(x i, x −i) ≤ B/n, which implies that sup f ∈F B E nf is a B/nbounded difference function. Then follows from the McDiarmid's inequality (Lemma D.2) using B i = B/n and δ = exp(−2nε 2 /B 2).As to, DISPLAYFORM10 that ends the proof. We also need the following contraction inequality of Rademacher Complexity BID11 BID13. Lemma D.3 (Rademacher Contraction Inequality). For any Lipschitz function: DISPLAYFORM11 has an additional factor 2 in the contraction inequality which is dropped in BID13. Its current form is stated in BID15 as Talagrand's Lemma (Lemma 4.2).Beyond, we further introduce the family, DISPLAYFORM12 and the sub-family constraint in Lipschitz semi-norm on f, DISPLAYFORM13 The following lemma BID8 allows us to bound the Rademacher complexity term of R n (G) by R n (H), DISPLAYFORM14 where the last inequality is implied from R n ({max(f 1, . . ., f M): BID8 BID15. DISPLAYFORM15 Proof of Proposition 1. Without loss of generality, we assume L σi = 1, i = 1,..., l. Let T (r) =: {t(x) = w · x: w 2 ≤ r} be the class of linear function with Lipschitz semi-norm less than r and we show that for each t ∈ T (L/2), there exists f ∈ F with f F ≤ L and y 0 ∈ {1, . . ., K} such that where f F ≤ Π l i=1 W i σ = 2L/2 = L, and thus h ∈ H L by definition. Therefore, R n (H L) ≥ R n (T (L/2)), DISPLAYFORM0 DISPLAYFORM1 where the second equality is implied from Cauchy-Schwarz inequality and the last inequality is implied from Khintchine inequality. Proof of Theorem 1. Consider l (γ1,γ2) (ζ(f (x), y)), wheref:= f /L f is the normalized network, ζ(f (x), y) ∈ G 1. Then for any γ 2 > γ 1 ≥ 0, DISPLAYFORM0 ≤ P n (γ1,γ2) (f (x), y) + 2R n (l (γ1,γ2) • G 1 ) + log(1/δ) 2n, ≤ P n (γ1,γ2) (f (x), y) + 2 ∆ R n (G 1) + log(1/δ) 2n, ≤ P n γ1,γ2 (f (x), y) + 2K 2 ∆ R n (H 1) + log(1/δ) 2n, ≤ P n γ2 (f (x), y) + 2K 2 ∆ R n (H 1) + log(1/δ) 2n, where the first and last inequality is implied from 1[ζ < γ 1] ≤ (γ1,γ2) (ζ) ≤ 1[ζ < γ 2], the second inequality is a direct consequence of Lemma D.1, the third inequality from Rademacher Contraction Inequality (Lemma D.3) and finally the fourth equation is implied from Lemma D.4. Proof of Theorem 2. 
Firstly, we show after normalization, the normalize margin has an upper bound, DISPLAYFORM0, where x i = σ i (W i x i−1 + b i) with x 0 = x,W i = (W i, b i) and L σi is the Lipschitz constant of activation function σ i with σ i = 0, i = 1,..., L. Then, for normalized networkf = f /L f with DISPLAYFORM1 Therefore ζ(f (x), y) ≤ 2 f (x) 2 = 2(M + L) =: M 1, and the quantile margin is also bounded γ q,t ≤ M 1 for all q ∈, t = 1,..., T.The remaining proof is standard. For any > 0, we take a sequence of k and γ k, k = 1, 2,... by k = + log k n and γ k = M 1 2 −k. Then by Theorem 1, DISPLAYFORM2 where A k is the event P[ζ(f t (x), y) < 0] > P n [ζ(f (x), y) < γ k ] + 2K 2 γ k R(H 1) + k, and the probability is taken over samples {x 1, ...x n}. We further consider the probability for none of A k occurs, DISPLAYFORM3 2 ), ≤ 2 exp(−2n 2).Hence, fix a q ∈, for any t = 1,..., T, as long asγ q,t > 0, there exists ak ≥ 1 such that, γk +1 ≤γ q,t < γk. Therefore, DISPLAYFORM4 ⊇ P[ζ(f t (x), y) < 0] > P n [ζ(f t (x), y) <γ q,t ] + 4K 2 γ q,t R(H 1) + k +1, = P[ζ(f t (x), y) < 0] > P n [ζ(f t (x), y) >γ q,t ] + 4K 2 γ q,t R(H 1) + + log(k + 1) n, ⊇ P[ζ(f t (x), y) < 0] > P n [ζ(f t (x), y) >γ q,t ] + 4K 2 γ q,t R(H 1) + + log log 2 (2M 1 /γ q,t) n.The first inequality is implied from P n [ζ(f t (x), y) <γ q,t ] > P n [ζ(f t (x), y) < γk +1 ], since γk +1 ≤ γ q,t. The second inequality is implied fromγ q,t < 2γk +1 and thus, 1/γk +1 < 2/γ q,t. The third equality is the direct definition of k. The last inequality is implied fromk + 1 = log 2 (M 1 /γk +1) and again, 1/γk +1 < 2/γ q,t. The is proved immediately if we do a transform from to δ. | Bregman's dilemma is shown in deep learning that improvement of margins of over-parameterized models may result in overfitting, and dynamics of normalized margin distributions are proposed to predict generalization error and identify such a dilemma. | 1,277 | scitldr |
It has been an open research challenge for developing an end-to-end multi-domain task-oriented dialogue system, in which a human can converse with the dialogue agent to complete tasks in more than one domain. First, tracking belief states of multi-domain dialogues is difficult as the dialogue agent must obtain the complete belief states from all relevant domains, each of which can have shared slots common among domains as well as unique slots specifically for the domain only. Second, the dialogue agent must also process various types of information, including contextual information from dialogue context, decoded dialogue states of current dialogue turn, and queried from a knowledge base, to semantically shape context-aware and task-specific responses to human. To address these challenges, we propose an end-to-end neural architecture for task-oriented dialogues in multiple domains. We propose a novel Multi-level Neural Belief Tracker which tracks the dialogue belief states by learning signals at both slot and domain level independently. The representations are combined in a Late Fusion approach to form joint feature vectors of (domain, slot) pairs. Following recent work in end-to-end dialogue systems, we incorporate the belief tracker with generation components to address end-to-end dialogue tasks. We achieve state-of-the-art performance on the MultiWOZ2.1 benchmark with 50.91% joint goal accuracy and competitive measures in task-completion and response generation. In a task-oriented dialogue system, the Dialogue State Tracking (DST) module is responsible for updating dialogue states (essentially, what the user wants) at each dialogue turn. The DST supports the dialogue agent to steer the conversation towards task completion. As defined by Henderson et al. (2014a), a dialogue belief state consists of inform slots -information to query a given knowledge base or database (DB), and request slots -information to be returned to the users. Task-oriented dialogues can be categorized as either single-domain or multi-domain dialogues. In single-domain dialogues, humans converse with the dialogue agent to complete tasks of one domain. In contrast, in multi-domain dialogues, the tasks of interest can come from different domains. A dialogue state in a multi-domain dialogue should include all inform and request slots of corresponding domains up to the current turn. Examples of a single-domain dialogue and a multi-domain dialogue with annotated states after each turn can be seen in Figure 1. Despite there being several efforts in developing task-oriented dialogue systems in a single domain (a;), there have been limited contributions for multi-domain task-oriented dialogues. Developing end-to-end systems for multi-domain dialogues faces several challenges: Belief states in multi-domain dialogues are usually larger and more complex than in single-domain, because of the diverse information from multiple domains. Each domain can have shared slots that are common among domains or unique slots that are not shared with any. In an end-to-end system, the dialogue agent must incorporate information from source sequences, e.g. dialogue context and human utterances, as well as tracked belief states and extracted information from knowledge base, to semantically shape a relevant response with accurate information for task completion. Directly applying methods for single-domain dialogues to multi-domain dialogues is not straightforward because the belief states extend across multiple domains. 
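For concreteness, a multi-domain belief state of the kind tracked here can be pictured as nested inform and request slots per domain; the values below are illustrative and not drawn from the corpus:

    # illustrative belief state after a turn that mentions both the restaurant and taxi domains
    belief_state = {
        "restaurant": {
            "inform": {"food": "italian", "area": "centre", "pricerange": "moderate"},
            "request": ["phone", "address"],
        },
        "taxi": {
            "inform": {"destination": "the restaurant", "leaveat": "18:30"},
            "request": [],
        },
    }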
A possible solution is to process a multi-domain dialogue for N D times for N D domains, each time obtaining a belief state of one domain. However, this approach does not allow learning co-references in dialogues whereby users can switch from one domain to another turn by turn. We propose an end-to-end dialogue system approach which explicitly track the dialogue states in multiple domains altogether. Specifically, we propose Multi-level Neural Belief Tracker to process contextual information for both slot-level and domain-level signals independently. The two levels are subsequently combined to learn multi-domain dialogue states. Our dialogue state tracker enables shared learning of slots common among domains as well as learning of unique slots in each domain. we utilize multi-head attention layers to comprehensively process various types of information: dialogue context, user utterances, belief states of both inform and request slots, and DB query . The multi-head structure allows the model to independently attend to the features over multiple representation sub-spaces; and we combine all components to create a dialogue system from state tracking to response generation. The system can be jointly learned in an end-to-end manner. Our end-to-end dialogue system utilizes supervision signals of dialogue states and output responses without using system action annotation. To comprehensively validate our method, we compare our models with baselines in end-to-end, DST, and context-to-text generation settings. We achieve the state-of-the-art performance in DST, task-completion, and response generation in the MultiWOZ2.1 corpus ) as compared to other baselines in similar settings. In context-to-text generation setting that allows supervision of dialogue acts, our models can achieve competitive measures of Inform and BLEU metric. Our work is related to 2 main bodies of research: DST and end-to-end dialogue systems. Prior DST work focuses on single-domain dialogues using WOZ and DSTC2 (a) corpus. (Mrkšić et al., 2015; b;) address transfer learning in dialogues from one domain to another rather than multiple domains in a single dialogue. Our work is more related to recent effort for multi-domain DST such as; a;. These models can be categorized into two main categories of DST: fixed-vocabulary and open-vocabulary approach. Fixed vocabulary models assume known slot ontology with a fixed candidate set for each slot. Open-vocabulary models (; a;) derive the candidate set based on the source sequence i.e. dialogue history, itself. Our approach is more related to open-vocabulary approach as we aim to dynamically generate dialogue state based on input dialogue history. Different from most prior work, our Multi-level Neural Belief Tracker can learn domain-level and slot-level signals independently and both are combined in a Late Fusion manner to obtain contextual representations of all (domain, slot) pairs. Conventionally, an end-to-end dialogue system is composed of separate modules for Natural Language Understanding (NLU) , DST (b;), Dialogue Policy (;, and Natural Language Generator (NLG) (a;). These components can be learned independently and combined into a pipeline architecture for end-to-end system (; ; . Another line of research aims to develop a dialogue agent without modularizing these components but incorporating them into a single network (; ; ; b). 
Our work is more related to the latter approach whereby we incorporate conventional components into an integrate network architecture and jointly train all parameters. However, following , we consider a separate module that combines NLU and DST together. The module utilizes additional supervision for more fine-grained tracking of user goals. This strategy is also suitable for large-scale knowledge base with large number of entities. (; b;) completely omit the DST component by formulating entity attributes into memory form based on (Subject, Relation, Object) tuples. These models achieve good performance in small-scale corpus such as In-Car Assistant and WOZ2.0 but will become extremely hard to scale to large knowledge base in multi-domain setting such as MultiWOZ corpus. Given a dialogue with dialogue history of t − 1 turns, each including a pair of user utterance and system response, (U 1, S 1),..., (U t−1, S t−1), the user utterance at current dialogue turn U t, and a knowledge base in form of entity data tables, the goal of a task-oriented dialogue system is to generate a response S t that is not only appropriate to the dialogue context, but also task-related with the correct entity for the users. In the multi-domain dialogue setting, turns in the dialogue history and the current user utterance could come from different domains. Therefore, the generated response in this setting should also be domain-related with the correct domain-specific information for the users. We propose a novel Multi-level Neural Belief Tracker to track belief states at both domain level and slot level to address multi-domain dialogues. Following , we utilize the previous belief states B t−1 as an input to the model. This allows the model to rely on the dialogue states detected from the previous dialogue step t − 1 to update the state of the current step t. In addition, we adopt the attention-based principle of Transformer network and propose an end-to-end architecture for task-oriented dialogues. Our model allows comprehensive information processing from different input sources, incorporating contextual features from dialogue context and user utterance as well as learning signals from domain-level and slot-level dialogue states. Our solution consists of 3 major components: (i) Encoders encode sequences of dialogue history, current user utterances, target system responses, domain and slot names, and previous dialogue belief states, into continuous representations. (ii) Multi-level Neural Belief Tracker includes 2 modules, one for processing slot-level information and one for domain-level information. Each module comprises attention layers to project domain or slot token representations and attend on relevant features for state tracking. The outputs of the two modules are combined to create domain-slot joint feature representations. Each feature representation is used as a context-aware vector to decode the corresponding inform or request slots in each domain. (iii) Response Generator projects the target system responses and incorporates contextual information from dialogue context as well as intermediate variables from the state tracker and DB query . Employing attention mechanisms with feed-forward and residual connections allows our models to focus on relevant parts of the inputs and pass on the relevant information to decode appropriate system responses. We combine all the modules into an end-to-end architecture and jointly train all components. An overview of the proposed approach can be seen in Figure 2. 
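For readers who prefer code, the composition of the three components above can be summarized as a single module. This is a hedged structural sketch only; all class and argument names are illustrative placeholders rather than the paper's implementation.

```python
import torch.nn as nn

class EndToEndDialogueModel(nn.Module):
    """High-level skeleton of the three components described above; names are illustrative."""
    def __init__(self, encoders, belief_tracker, response_generator, db_lookup):
        super().__init__()
        self.encoders = encoders                      # (i)  encode history, utterance, states, ...
        self.belief_tracker = belief_tracker          # (ii) multi-level (slot + domain) tracker
        self.response_generator = response_generator  # (iii) attention-based generator
        self.db_lookup = db_lookup                    # queries the KB with a decoded state

    def forward(self, history, user_utt, prev_belief, target_response):
        z = self.encoders(history, user_utt, prev_belief, target_response)
        belief_state, joint_features = self.belief_tracker(z)
        db_pointer = self.db_lookup(belief_state)
        response_logits = self.response_generator(z, joint_features, db_pointer)
        return belief_state, response_logits
```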
An encoder encodes a text sequence of tokens (x 1, ..., x n) to a sequence of continuous representation z = (z 1, ..., z n) ∈ R n×d. Each encoder includes a token-level trainable embedding layer and layer normalization . Depending on the type of text sequences, we inject sequential characteristics of the tokens (i.e. their positions in the sequence) using a sine and cosine positional encoding functions . Element-wise summation is used to combine the token- level embedding with positional embedding, each has the same embedding dimension d. The current user utterance U t is tokenized, prepended and appended with sos and eos token respectively. In the dialogue history, each human utterance and system response up to dialogue step t − 1 is processed similarly. The tokenized past utterances and system responses are concatenated sequentially by the dialogue step. For target system response S t, during training, the sequence is offset by one position to ensure that token prediction in generation step i is based on the previous positions only i.e. 1,..., i − 1. Denoting name sloti and value sloti as the slot name and slot value of slot i, we create sequences of dialogue belief state from previous turn by following the template: value sloti inf _name sloti... req_name slotj... domain d... A req_name slotj is only included in the sequence if slot j is predicted as in the previous turn. As a slot such as area can be both request or inform type, the 2 slot types are differentiated by the prefixes inf and req. Our belief sequences can be used to encode past dialogue states of multiple domains, each separated by the domain d token. To learn slot-level and domain-level signals for state tracking, we construct set of slot and domain tokens as input to the state tracker. Each input set is created by concatenating slot names or domains: respectively. Both sequences are kept fixed in all dialogue samples to factor in all possible domains and slots for multi-domain state tracking. Positional encoding is used in all sequences except for input sets of slot and domain tokens as these sets do not contain sequential characteristic. Embedding weights are shared among all the encoders of source sequences. Embedding weights of the target system responses are not shared to allow the models to learn the semantics of input and output sequences differently. The DST module processes slot-level and domain-level information independently, and integrates the two for multi-domain state tracking. We adopt a Late Fusion approach to combine domain and slot representations in deeper network layers. Slot-level Processing. Given the encoded features from the source sequences, including dialogue history z his, previous belief state z bs, and the current user utterance z utt, the slot-level signals are learned by projecting the encoded slot token sequence z S through N S dst identical layers. Each layer contains 4 attention blocks, each of which employ the multi-head attention mechanism to attend on the inputs at different representation sub-spaces. Each attention block is coupled with a position-wise feed-forward layer, including 2 linear transformations with ReLU activation in between. Residual connection and layer normalization are employed in each attention block. 
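A minimal PyTorch sketch of one such attention block, and of a slot-level processing layer stacking four of them (self-attention, then attention over the dialogue history, the previous belief state, and the current user utterance), is given below. Module names and default sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn

class AttentionBlock(nn.Module):
    """Multi-head attention followed by a position-wise feed-forward layer,
    each with a residual connection and layer normalization."""
    def __init__(self, d_model, n_heads, d_ff):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, z_out, z_seq):
        # z_out: current slot-token features (batch, n_slots, d_model)
        # z_seq: encoded source sequence to attend on (batch, seq_len, d_model);
        #        pass z_seq = z_out for the self-attention block.
        attended, _ = self.attn(query=z_out, key=z_seq, value=z_seq)
        z = self.norm1(z_out + attended)            # residual connection
        return self.norm2(z + self.ff(z))

class SlotLevelLayer(nn.Module):
    """One slot-level processing layer: self-attention, then attention over the
    dialogue history, the previous belief state, and the current user utterance."""
    def __init__(self, d_model=256, n_heads=4, d_ff=1024):
        super().__init__()
        self.blocks = nn.ModuleList(AttentionBlock(d_model, n_heads, d_ff) for _ in range(4))

    def forward(self, z_slot, z_his, z_bs, z_utt):
        z = self.blocks[0](z_slot, z_slot)          # relations among slots, independent of domains
        z = self.blocks[1](z, z_his)
        z = self.blocks[2](z, z_bs)
        return self.blocks[3](z, z_utt)
```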
Specifically, given the current feature vector z out S as output from previous attention block (or z S itself in the first attention block of the first processing layer) and the encoded features z seq of a source sequence, the multi-head attention is defined as: where, and seq = {S, his, bs, utt} (for simplicity, the subscripts of S and seq are omitted in each W). The first attention block is a self-attention, i.e. seq = S, which enables learning the relation between slots independently from domains. Subsequent attention layers on dialogue context, previous belief state, and user utterance of current turn, inject each slot token representation with dialogue contextual information up to current user utterance in turn t. Through residual connection, the contextual information are passed forward in each z out S. Using different attention blocks allows flexible processing of information from various input sources. Domain-level Processing. The input to the domain-level processing module includes the encoded domain token sequence z D, the encoded dialogue history up to turn t − 1 z his, and the encoded user utterance of current turn z utt. The domain features are passed through N D dst identical layers, each of which include 3 multi-head attention blocks to obtain important contextual information from dialogue context and user utterance. Similarly to slot-level processing, a self-attention block is leveraged to allow reasoning among domains independently from slots. Attending on dialogue history and current user utterance separately enables learning domain signals from the contextual information of past dialogue turns and current turns differently. Therefore, the models can potentially detect changes of dialogue domains from past turns to the current turn. Especially in multi-domain dialogues, users can switch from one domain to another and the generated responses should address the latest domain. d is used to decode the corresponding domain-specific slot i. The vector is used as initial hidden state for an RNN decoder to decode an inform slot token by token or passed through a linear transformation layer for binary classification for a request slot. The decoded dialogue states are used to query the DBs of all domains and obtain the number of the entities in each domain. We then create a fixed-dimensional one-hot pointer vector for each domain d: z We embed the pointer vector with the learned embedding and positional embedding as similarly described in Section 3.1, ing in z db ∈ R 6N D ×d. The DB pointer vector z db, context-aware domain-slot joint features z out DS, encoded dialogue history z his, and user utterance of current turn z utt, are used as inputs to incorporate relevant signals to decode system responses. The generator includes N gen identical layers, each includes 5 multi-head attention blocks, including a self-attention block at the beginning. Adopting attention with residual connection in each block allows the models to comprehensively obtain contextual cues, either through text sequences or domain-slot joint features, and knowledge base signals from DB pointer vectors. The final output z out gen is passed to a linear transformation with softmax activation to decode system responses. 
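Tying the tracker outputs above to the generator inputs, here is a hedged sketch of decoding one (domain, slot) joint feature vector and of building the per-domain DB pointer vector. The GRU decoder, the binary classifier, and the 6-bin edges are assumptions about the implementation rather than details stated in the text.

```python
import torch
import torch.nn as nn

class SlotValueDecoder(nn.Module):
    """Decode one (domain, slot) joint feature vector: an RNN for inform values,
    a linear binary classifier for request slots."""
    def __init__(self, d_model, vocab_size):
        super().__init__()
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)   # token logits for inform slot values
        self.req = nn.Linear(d_model, 1)            # binary logit for request slots

    def decode_inform(self, joint_feature, value_embeddings):
        # joint_feature: (batch, d_model), used as the initial hidden state of the decoder
        # value_embeddings: (batch, value_len, d_model), teacher-forced value tokens
        hidden, _ = self.rnn(value_embeddings, joint_feature.unsqueeze(0))
        return self.out(hidden)                     # (batch, value_len, vocab_size)

    def decode_request(self, joint_feature):
        return self.req(joint_feature).squeeze(-1)  # (batch,)

def db_pointer(num_results, num_bins=6):
    """One-hot pointer over bins of the DB result count for one domain
    (the bin edges below are an assumption)."""
    edges = [0, 1, 2, 3, 5]
    idx = sum(num_results > e for e in edges)       # an index in 0..num_bins-1
    vec = torch.zeros(num_bins)
    vec[idx] = 1.0
    return vec
```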
The objective function is a combination of belief state objectives, including the log-likelihood of all inform slot sequences S inf, and the binary cross entropy of request slots S req, and the system response objective, including the log-likelihood of the target response sequence T, as follows: where The above objectives are conditioned on the input features, including dialogue context C, current user utterance U, previous and current belief state B t−1 and B t, and DB queries Q. 4.1 DATA We used the MultiWOZ 2.1 dataset ) which consists of both single-domain and multi-domain dialogues. Compared to version 2.0, MultiWOZ 2.1 is improved with some correction of DST labels, including about 40% changes across training samples. We pre-processed the dialogues by tokenizing, lower-casing, and delexicalizing all system responses. From the belief state annotation of the training data, we identified all possible domains and slots. We identified N D = 7 domains and N S = 35 unique inform slots in total. We followed the preprocessing scripts as provided by b). The corpus includes 8,438 dialogues in the training with an average of 1.8 domains per dialogue. Each dialogue has more than 13 turns. There are 1,000 in each validation and test set, each including an average of 1.9 domains per dialogue. Other details of data pre-processing procedures, domains, slots, and entity DBs, are included in Appendix A.1. The model parameters are: We employed dropout of 0.3 at all network layers except the linear layers in the generative components. Label smoothing for target system responses is applied during training. During training, we utilize teacher-forcing learning strategy by simply using the ground-truth inputs of dialogue state from previous turn and the gold DB pointer. During inference, in each dialogue, we decode system responses sequentially turn by turn, using the previously decoded belief state as input in the current turn, and at each turn, using the decoded belief state to query DBs for pointer vectors. We train all networks in an end-to-end manner with Adam optimizer and the learning rate schedule similarly adopted by. We used batch size 32 and tuned the warmup_steps from 9K to 15K training steps. All models are trained up to 30 epochs and best models are selected based on validation loss. We used a greedy approach to decode all slots and beam search with beam size 5 and a length penalty 1.0 to decode responses. The maximum length is set to 10 tokens for each slot and 20 for system responses. Our models are implemented using PyTorch . To evaluate the models, we use the following metrics: DST metrics: Joint Accuracy and Slot Accuracy (b). Joint Accuracy compares the predicted dialogue states to the ground truth in each dialogue turn. All slot values must match the ground truth labels to be counted as a correct prediction. Slot Accuracy considers individual slot-level accuracy across the topology. Task-completion metrics: Inform and Success . Inform refers to system ability to provide an appropriate entity while Success is the system ability to answer all requested attributes. Generation metrics: BLEU score . We ran all experiments 3 times and reported the average . We report in 2 different settings: end-to-end dialogues and DST. In end-to-end setting, we train a dialogue agent that is responsible for both DST and text generation without assuming access to ground-truth labels. End-to-End. In this setting, we compare our model performance on the joint task of DST and context-to-text generation. 
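For reference, the two DST metrics just defined can be computed as follows. Treating unmentioned slots as "none" is an assumed convention here, since the exact handling of the slot ontology is not spelled out in this section.

```python
def dst_accuracy(predicted_states, gold_states):
    """Joint Accuracy and Slot Accuracy over a list of dialogue turns (a sketch).

    Each state is a dict mapping (domain, slot) -> value for one turn; slots that are
    not mentioned are treated as having the value "none" (an assumed convention)."""
    joint_correct, slot_correct, slot_total = 0, 0, 0
    for pred, gold in zip(predicted_states, gold_states):
        keys = set(pred) | set(gold)
        matches = [pred.get(k, "none") == gold.get(k, "none") for k in keys]
        joint_correct += all(matches)               # every slot must match for a correct turn
        slot_correct += sum(matches)
        slot_total += len(matches)
    return joint_correct / len(gold_states), slot_correct / max(slot_total, 1)
```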
For a fair comparison, we select TSCP as the baseline as TSCP does not use additional supervision signals of system action as input. This is the current state-of-the-art for the end-to-end dialogue task in the single-domain WOZ . TSCP applies a pointer network to develop a two-stage decoding process to decode belief states, in the form of a text sequence, and subsequently decode system responses. We adapt the method to the multi-domain dialogue setting. We experiment with 2 cases of TSCP in which the maximum length of the output state sequence L bspan is set to 8 and 20 tokens. As can be seen in Table 1, our model outperforms the baseline in all metrics, except for the Slot Acc metric in one case. Overall, our model performs well in both multi-domain and single-domain dialogues, especially with a higher performance gain in multi-domain dialogues. The performance gain in multi-domain dialogues can be explained by the separate network structure between the domain and slot processing modules in our models. This allows our models to learn domain-dependent and slot-dependent signals separately before the two are fused into joint feature vectors for downstream processing. For TSCP, increasing L bspan from 8 to 20 tokens helps to improve the performance, but also increases the training time to convergence significantly. In our approach, all inform and request slots are decoded independently and the training time is less affected by the size of the target dialogue states, especially in cases of extensive belief states (e.g. 4 or 5 domains in a dialogue). Additional results by individual domains are described in Appendix A.3. DST. We isolate the DST components (i.e. training models only with L(B t)) and report the DST performance. We compare the performance with the baseline models on MultiWOZ 2.1 in Table 2 (refer to Appendix A.2 for more description of the DST baselines). Our model outperforms existing baselines and achieves the state-of-the-art performance on the MultiWOZ2.1 corpus. By leveraging dialogue context signals through independent attention modules at the domain level and slot level, our DST can generate slot values more accurately. DST approaches that try to separate domain and slot signals such as TRADE (a) reveal competitive performance. However, our approach has better performance as we enable deeper interaction of context-related signals in each domain and slot representation. Compared to TRADE, our approach can be considered a Late Fusion approach that combines representations in deeper network layers for better joint features of domains and slots. We also noted that DST performance improves when our models are trained as an end-to-end system. This can be explained as additional supervision from system responses not only contributing to learning a good response generation network but also positively impacting the DST network. Additional DST results of individual domains can be seen in Appendix A.3. For completeness, we also conduct an experiment in the context-to-text generation setting and compare with baseline models in Appendix A.3. We experiment with different model variants in Table 3. First, we noted that removing self-attention on the joint domain-slot feature vectors (N DS dst = 0) reduces the joint accuracy performance. This self-attention is important because it allows our models to learn signals across (domain, slot) joint features rather than just independently at the domain level and slot level. Second, reducing the number of attention layers in domain-level processing and slot-level processing from 3 to 1 gradually degrades the model performance. This shows the efficacy of our Late Fusion approach.

Table 2: DST Joint Accuracy on MultiWOZ 2.1. HJST: 35.55%; DST Reader: 36.40%; TSCP: 37.12%; FJST: 38.00%; HyST: 38.10%; TRADE: 45.60%; Ours: 49.55%.

Combining the features at deeper network layers results in better joint feature representations and hence increases the model performance. Lastly, we observed that our models can efficiently detect contextual signals from the dialogue states of the previous turn PrevBS, as the performance of our models with or without using the full dialogue history is very similar. This is beneficial as the dialogue history grows over time, since our models only need to process the latest dialogue turn in combination with the predicted dialogue state of the previous turn as input. Qualitative Analysis. We examine an example dialogue in the test data and compare our predicted outputs with the baseline TSCP (L bspan = 20) and the ground truth. From the table in the left of Figure 3, we observe that both our predicted dialogue state and system response are more correct than the baseline. Specifically, our dialogue state can detect the correct type slot in the attraction domain. As our dialogue state is correctly predicted, the queried result from the DB is also more correct, resulting in a better response with the right information (i.e. 'no attraction available'). From visualization of the domain-level and slot-level attention on the user utterance, we notice that important tokens of the text sequences, i.e. 'entertainment' and 'close to', are attended with higher attention scores. In addition, at the domain-level attention, we find a potential additional signal from the token 'restaurant', which is also the domain from the previous dialogue turn. We also observe that attention is more refined along the neural network layers. For example, in the domain-level processing, compared to the 2nd layer, the 4th layer attention is more clustered around specific tokens of the user utterance. The complete predicted output for this example dialogue and other qualitative analysis can be seen in Appendix A.4. In this work, we proposed an end-to-end dialogue system with a novel Multi-level Neural Belief Tracker. Our DST module can track complex belief states of multiple domains and output more accurate dialogue states. The DST is combined with an attention-based generation module to generate dialogue responses. Evaluated on the large-scale multi-domain dialogue benchmark MultiWOZ2.1, our models achieve the state-of-the-art performance in DST and competitive measures in task completion and response generation. Figure 3: Example dialogue with the input system response St−1 and current user utterance Ut, and the output belief state BSt and system response St. Compared with TSCP (Row 3), our dialogue state and response (Last Row) are more correct and closer to the ground truth (Row 2). Visualization of attention to the user utterance sequence at slot-level (lower right) and domain-level (upper right) is also included. More red denotes higher attention score between domain or slot representation and token representation. Best viewed in color. A.1 DATA PRE-PROCESSING First, we delexicalize each target system response sequence by replacing each matched entity attribute appearing in the sequence with the canonical tag domain_attribute.
For example, the original target response'the train id is tr8259 departing from cambridge' is delexicalized into'the train id is train_id departing from train_departure'. We use the provided entity databases (DBs) to match potential attributes in all target system responses. For dialogue history, we keep the original version of all text, including system responses of previous turns, rather than the delexicalized form. We split all sequences of dialogue history, user utterances of current turn, previous belief states, and delexicalized target responses, into case-insensitive tokens. We share the embedding weights of all source sequences. For source sequences, in total there are 5,491 unique tokens, including slot and domain tokens as well as eos, sos, pad, and unk tokens. For target sequences, there are 2,648 unique tokens in total, including all canonical tags as well as eos, sos, pad, and unk tokens. As can be seen in Table 4, in source sequences, the overlapping rates of unique tokens to the training embedding vocabulary are about 64% and 65% in validation and test set respectively. For target sequences, the overlapping rates are about 83% and 82% in validation and test set respectively. As we analyze the data, we summarize the number of dialogues in each domain in Table 5. For each domain, a dialogue is selected as long as the whole dialogue (i.e. single-domain dialogue) or parts of the dialogue (i.e. in multi-domain dialogue) is involved with the domain. For each domain, we also build a set of possible inform and request slots using the belief state annotation in the training data. The details of slots, entity attributes, and DB size, in each domain, can be seen in Table 6. The DBs of 3 domains taxi, police, and hospital were not provided in the benchmark. We describe a list of baseline models in DST setting and context-to-text generation setting. FJST and HJST . FJST and HJST follow a fixed-vocabulary approach for state tracking. Both models include encoder modules (either bidirectional LSTM or hierarchical LSTM) to encode the dialogue history. The models pass the context hidden states to separate linear transforma- tion to obtain final vectors to predict individual state slots separately. The output vector is used to measure a score of a predefined candidate set for each slot. TSCP . TSCP is an end-to-end dialogue system that can do both DST and NLG. The model utilize pointer network to generate both dialogue states and responses. To compare with TSCP in DST setting, we adapt the model to multi-domain dialogues and report the only on DST components. For DST experiment, we reported the performance when the maximum length of dialogue state sequence in the state decoder L is set to 20 tokens. DST Reader . This model considers the DST task as a reading comprehension task. The model predicts each slot as a span over tokens within dialogue history. DST Reader utilizes attention-based neural network with additional modules to predict slot type and slot carryover probability. HyST. This baseline combines the advantage of both fixed-vocabulary and open-vocabulary approaches. In open-vocabulary, the set of candidates of each slot is constructed based on all word n-grams in dialogue history. Both approaches are applied in all slots and depending on their performance in validation set, the better approach is applied to predict individual slots. TRADE (a). This is the current state-of-the-art model on the MultiWOZ2.1 dataset. 
The model combines a pointer network to generate individual slot values token-by-token. The prediction is additionally supported by a slot gating component that decides whether the slot is "none", "dontcare", or "pointer" (generated). provides a baseline for this setting by following the sequence-to-sequence model with additional signals from the belief tracker and a discrete data pointer vector. TokenMoE . TokenMoE refers to the Token-level Mixture-of-Experts model. The model follows a modularized approach by separating different components known as expert bots for different dialogue scenarios. A dialogue scenario can be dependent on a domain, a type of dialogue act, etc. A chair bot is responsible for controlling the expert bots to dynamically generate dialogue responses. HDSA . This is the current state-of-the-art for the context-to-text generation setting in MultiWOZ2.0. HDSA leverages the structure of dialogue acts to build a multi-layer hierarchical graph. The graph is incorporated as an inductive bias in the self-attention network to improve the semantic quality of generated dialogue responses. Structured Fusion . This approach follows a traditional modularized dialogue system architecture, including separate components for NLU, DM, and NLG. These components are pretrained and combined into an end-to-end system. Each component output is used as a structured input to other components. LaRL . This model uses a latent dialogue action framework instead of traditional handcrafted dialogue acts. The latent variables are learned using unsupervised learning with stochastic variational inference. The model is trained in a reinforcement learning framework whereby the parameters are trained to yield a better Success rate. Domain-Specific Results. In Tables 7 and 8, we present additional results of our model and the baseline TSCP . For state tracking, the metrics are calculated for domain-specific slots of the corresponding domain at each dialogue turn. For task completion and response generation, we calculate the metrics for single-domain dialogues of the corresponding domain. We do not report the Inform metric for the taxi domain because no DB was provided in the benchmark for this domain. From Table 7, in each domain, our approach outperforms TSCP across most of the metrics, except the Success and BLEU metrics in the taxi domain. In terms of task completion, our model shows significant improvement in domains with large DB sizes such as train and restaurant. In terms of response generation, our results are consistently higher than the baselines as the model can return more appropriate responses from better decoded dialogue states and DB queried results. For the state tracking task alone, in Table 8, our models perform consistently in the attraction, restaurant, and train domains. However, the performance significantly drops in the taxi domain. This performance drop negatively impacts the overall performance across all domains. We plan to investigate further to identify and address challenges in this particular domain in future work. Context-to-Text Generation. To compare with baselines in this setting, we assume access to the ground-truth labels of dialogue belief states and the data pointer during inference. We compare with existing baselines in Table 9 (refer to Appendix A.2 for more description of the baselines). Our model achieves the state-of-the-art result in the Inform metric but does not perform as well in terms of the Success metric. We achieve a competitive BLEU score, only behind the current state-of-the-art HDSA model.

Table 9: Performance for the context-to-text generation setting on MultiWOZ2.0. The baseline results are as reported in the benchmark leaderboard.
Model | Inform | Success | BLEU
Baseline | 71.29% | 60.96% | 18.80
TokenMoE | 75.30% | 59.70% | 16.81
HDSA | 82.90% | 68.90% | 23.60
Structured Fusion | 82.70% | 72.10% | 16.34
LaRL | 82.78% | 79.20% | 12.80
Ours | 83.83% | 67.36% | 19.88

An explanation for our model not being able to achieve a high Success metric is that we did not utilize the dialogue act information. The current state-of-the-art HDSA leverages the graph structure of dialogue acts in its dialogue models. Furthermore, compared to approaches such as , our model does not use pretrained network modules such as NLU and DST. Our end-to-end setting is more closely related to the line of research on end-to-end dialogue systems without relying on system action annotation (; ; b). To improve the Success metric, we plan to extend our work in the future to derive a better dialogue policy for a higher Success rate. In Table 10, we report the complete output of an example multi-domain dialogue. Overall, our dialogue agent can carry a proper dialogue with the user throughout the dialogue steps. Specifically, we observed that our model can detect new domains at the dialogue steps where the domains are introduced, e.g. the attraction domain at the 5th turn and the taxi domain at the 8th turn. The dialogue agent can also detect some of the co-references among the domains. For example, at the 5th turn, the dialogue agent can infer the slot area for the new domain attraction as the user mentioned 'close to the restaurant'. We noticed that at later dialogue steps such as the 6th turn, our decoded dialogue state is not correct, possibly due to the incorrect decoded dialogue state in the previous turn, i.e. the 5th turn. In Figures 4 and 5, we report the Joint Goal Accuracy and BLEU metrics of our model by dialogue turn. As we expected, the Joint Accuracy metric tends to decrease as the dialogue history extends over time. The dialogue agent achieves the highest accuracy in state tracking at the 1st turn and gradually reduces to zero accuracy at later dialogue steps, i.e. the 15th to 18th turns. For response generation performance, the trend of the BLEU score is less obvious. The dialogue agent obtains the highest BLEU scores at the 3rd turn and fluctuates between the 2nd and 13th turn. | We proposed an end-to-end dialogue system with a novel multi-level dialogue state tracker and achieved consistent performance on MultiWOZ2.1 in state tracking, task completion, and response generation performance. | 1,278 | scitldr
Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate a second-order derivative. In this paper,we connect a general family of learning objectives including score matching to Wassersteingradient flows. This connection enables us to design a scalable approximation to theseobjectives, with a form similar to single-step contrastive divergence. We present applications in training implicit variational and Wasserstein auto-encoders with manifold-valued priors. Unnormalized models define the model distribution as q(x; θ) ∝ exp(−E(x; θ)), where E(x; θ) is an energy function that can be parameterized by e.g. DNNs. Unnormalized models can be used directly for density estimation, but another important application is in gradient estimation for implicit variational inference, where we can use score estimation in latent space to approximate an intractable learning objective. This approach leads to improved performance in training implicit auto-encoders . Maximum likelihood estimation for unnormalized models is intractable, and score matching (Hyvärinen, 2005) is a popular alternative. Score matching optimizes the Fisher divergence where we denote the data distribution as p. Hyvärinen shows D F is equivalent to E p(x) ∆ log q(x; θ) + 1 2 ∇ log q(x; θ) 2, where ∆ = i ∂ 2 i is the Laplacian; the equivalent form can be estimated using samples from p. So far, when E has a complex parameterization, calculating the equivalent objective is still difficult, as it involves the second-order derivatives; and in practice, people turn to scalable approximations of the score matching objective (; ;) or other objectives such as the kernelized Stein discrepancy (KSD; b;). However, these approximations are developed on a case-by-case basis, leaving important applications unaddressed; for example, there is a lack of scalable learning methods for models on manifolds . In this work, we present a unifying perspective to this problem, and derive scalable approximations for a variety of objectives including score matching. We start by interpreting these objectives as the initial velocity of certain distribution-space gradient flows, which are simulated by common samplers. This novel interpretation leads to a scalable approximation algorithm for all such objectives, reminiscent to single-step contrastive divergence (CD-1). We refer to any objective bearing the above interpretation as above as a "minimum velocity learning objective", a term coined in the unpublished work. Our formulation is a distribution-space generalization of their work, and applies to different objectives as the choice of distribution space varies. Another gap we fill in is the development of a practically applicable algorithm: while the idea of approximating score matching with CD-1 is also explored in , previously the approximation suffers from an infinite variance problem, and is thus believed to be impractical ; we present a simple fix to this issue. Additionally, we present an approximation to the objective function instead of its gradient, thus enabling the use of regularization like early-stopping. Other related work will be reviewed in Appendix C. One important application of our framework is in learning unnormalized models on manifolds. This is needed in areas such as image analysis , geology and bioinformatics . 
Moreover, as we present an approximation to the Riemannian score matching objective, it enables flexible inference for VAEs and WAEs with manifold-valued latent variables, as it enables gradient estimation for implicit variational distributions on manifolds. It is believed that auto-encoders with a manifold-valued latent space can capture the distribution of certain types of data better (; ;). As we will see in Section 3, our method leads to improved performance of VAEs and WAEs. We now present our framework, which concerns all learning objectives of the following form: where q θ is the model distribution, {p t} is the gradient flow of KL q in a suitable distribution space (e.g. the 2-Wasserstein space), and KL q is the exclusive KL divergence functional, p → KL(p q θ). We refer to these objectives as "minimum velocity learning (MVL) objectives", since will show that they correspond to the initial velocity of the gradient flow. subsumes the Fisher divergence for score matching as a special case, since from the properties of the 2-Wasserstein space (please refer to Appendix A for the necessary preliminary knowledge), we have D F (p|q) = 1 2 grad p KL q 2, where the gradient and norm are defined in the 2-Wasserstein space, and the data manifold X is endowed with the Euclidean metric. Rearranging terms, we get i.e., score matching is a special case of the MVL objective, when the space of distributions is chosen as the 2-Wasserstein space P(X). In certain cases, the gradient flow of KL q corresponds to common samplers, and can be efficiently simulated: e.g., the gradient flow in P(X) is the (Riemannian) Langevin dynamics. Now we utilize this connection to design a scalable approximation to these objectives. First, note holds regardless of the chosen space of distributions. Denote As the first term in is independent of θ, the MVL objective is always equivalent to the second term. We approximate it by simulating a modified gradient flow: letp t be the distribution obtained by running the sampler targeting q 1/2. Then can be approximated by replacing the limit with a fixed, and running the corresponding sampler starting from a mini-batch of training data. The approximation becomes unbiased when → 0. A Control Variate The approximation to will have small bias. However, it suffers from high variance when the sampler consists of Itô diffusion (e.g. when it is Langevin dynamics). Fortunately, we can solve this problem with a control variate. To illustrate this, suppose the MVL objective is defined using Langevin dynamics (LD). Without loss of generality, we assume we use a batch size of 1 in the approximation, so approximation isL, where x + is sampled from the training data, and Z ∼ N (0, I). By Taylor expansion 1, and as → 0, VarL = Θ −1 → ∞. Thus a control variate is needed. In this LD example, the control variate is More generally, the control variate is always the inner product of ∇ x E(x +) and the diffusion term in the sampler. As a side product of our work, we note that similar control variate can be obtained for CD-1 and denoising score matching, and it solves their problem. See Appendix B.2. As an application, let us consider learning unnormalized models on Riemannian manifolds. In this case, the gradient flow of KL q in the 2-Wasserstein space becomes the Riemannian Langevin dynamics , and the approximate MVL objective becomes is a sample from the Riemannian LD, and z ∼ N (0, G −1 (y)). 
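For concreteness, a sketch of the Euclidean instance of this approximation (the single Langevin step targeting q^{1/2}, plus the control variate) is given below in PyTorch. The discretization constants and sign convention are assumptions and should be matched to the exact sampler; the Riemannian objective stated just above replaces this Euclidean Langevin step with a Riemannian one.

```python
import torch

def mvl_score_matching_loss(energy_net, x, eps=1e-4):
    """Approximate MVL (score matching) objective with its control variate (a sketch).

    energy_net maps a batch to scalar energies E(x; theta), with q(x) proportional to exp(-E);
    eps is the step-size of the single Langevin step targeting q^{1/2}."""
    e = lambda v: energy_net(v).view(-1)                     # energies as a (batch,) vector
    x = x.detach().requires_grad_(True)
    grad_e, = torch.autograd.grad(e(x).sum(), x, create_graph=True)

    z = torch.randn_like(x)
    x_eps = x - 0.5 * eps * grad_e + (2.0 * eps) ** 0.5 * z  # one Langevin step towards q^{1/2}

    diff = (e(x_eps) - e(x)) / eps                           # finite-difference velocity estimate
    control = (2.0 / eps) ** 0.5 * (grad_e * z).flatten(1).sum(dim=-1)
    return -(diff - control).mean()                          # minimize this to fit the data
```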
This is the Riemannian score matching objective , which also has the form of, but the norm is defined by the Riemannian metric of X. From this example, we can see the power of our framework, which enables us to approximate new objectives with ease. We now apply our approximation to learning implicit variational auto-encoders (VAEs) and Wasserstein auto-encoders (WAEs) with manifold-valued prior. First we review the use of score estimator in learning implicit auto-encoding models. We use VAE as an example; for WAE with KL penalty, the derivation is similar, see. The VAE objective is E p(x) E q(z|x;φ) log p(z)p(x|z;θ) q(z|x;φ), where q(z|x; φ) is the push-forward measure of N (0, I) by f (·; φ). The objective is intractable, as the entropy term H[q(z|x; φ)] is intractable; however, we can show that Thus to approximate the objective, it suffices to approximate the score function ∇ z log q(z). This can be implemented by learning an unnormalized model using score matching. As the score matching objective directly aligns the learnt score function ∇ z E to the data score, it can be viewed as score estimation using conservative fields, thus it leads to a better approximation to the gradient ∇ φ H[q(z)] compared to indirect approximations such as the adversarial density ratio estimators. As we turn to the case where the latent space is an embedded manifold, the original score matching objective can no long be used, since q(z) no longer has a density w.r.t. the Lebesgue measure in the embedded space. However, we can still do score estimation on the manifold, i.e. estimate the log derivative of the density w.r.t. the manifold Hausdorff measure. This can be done by fitting an unnormalized model on manifold, using the approximate objective developed in Section 2.2. We note that in this case, still holds, and we can still estimate the gradient of the ELBO using the score estimate. See Appendix E.4 for details. Empirical Evaluations We apply our method to train implicit hyperspherical VAEs with implicit encoders and WAEs, on the MNIST dataset. Our experiment setup follows , with the exception that we parameterize an energy network E(z; ψ) and uses its gradient as the score estimate, instead of parameterizing a score network. Detailed setup and additional synthetic experiments are in Appendix D. For the VAE experiment, we compare with VAEs with explicit variational posteriors, as well as Euclidean VAEs, and report negative log likelihood estimated with annealed importance sampling ; for the WAE experiment, we compare with WAE-GAN , and report the FID score . The are summarized in Figure 1. We can see that in all cases, hyperspherical prior outperforms the Euclidean prior, and our method leads to improved performance. Interestingly, for VAEs with explicit encoders, hyperspherical VAE could not match the performance of Euclidean VAE in high dimensions; this is consistent with the in , who incorrectly conjectured that hyperspherical prior is inadequate in high dimensions; we can see that the problem is actually the lack of flexibility in inference, which our method addresses. In this section, we review knowledge needed in this work, most importantly Wasserstein gradient flow and its connection to sampling algorithms. A (differential) manifold M is a topological space locally diffeomorphic to an Euclidean or Hilbert space. A manifold is covered by a set of charts, which enables the use of coordinates locally, and specifies a set of basis {∂ i} in the local tangent space. 
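A hedged sketch of how the learned energy can supply the gradient signal for an implicit hyperspherical encoder is given below: the surrogate term has a gradient with respect to the encoder parameters that approximates the negative entropy gradient, and the tangent-space projection is one assumed way to obtain a score on the sphere from -grad E. None of the function names reflect the paper's exact implementation.

```python
import torch

def spherical_score_from_energy(energy_net, z):
    """Score estimate on the unit sphere from a learned energy: take -grad E and project
    out the radial component (an assumed implementation of the Riemannian score)."""
    z = z.detach().requires_grad_(True)
    grad_e, = torch.autograd.grad(energy_net(z).sum(), z)
    score = -grad_e
    radial = (score * z).sum(dim=-1, keepdim=True) * z       # component normal to the sphere
    return score - radial

def entropy_surrogate(score, z):
    """A term whose gradient w.r.t. the encoder parameters approximates -d/d(phi) H[q(z)].

    z must be the reparameterized encoder output (so gradients flow into phi),
    and the score estimate is treated as a constant."""
    return (score.detach() * z).flatten(1).sum(dim=-1).mean()
```

During training, this surrogate is simply added to the remaining (negative) ELBO terms so that its gradient stands in for the intractable entropy gradient.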
A Riemannian manifold further possesses a Riemannian structure, which assigns to each tangent space T p M an inner product structure. The Riemannian structure can be described using coordinates w.r.t. local charts. The manifold structure enables us to differentiate a function along curves. Specifically, consider a curve c: [0, T] → M, and a smooth function f: M → R. At c(t) ∈ M, a tangent vector A tangent vector field assigns to each p ∈ M a tangent vector V p ∈ T p M. It determines a flow, a set of curves {φ p (t): p ∈ M} which all have V φp(t) as their velocity. On Riemannian manifolds, the gradient of a smooth function f is a tangent vector field p → grad p f such that It determines the gradient flow, which generalizes the Euclidean-space notion dx = ∇ x f (x)dt. We will work with two types of manifolds: the data space X when we apply our method to manifold-valued data, and the space of probability distributions over X. On the space of distributions, we are mostly interested in the 2-Wasserstein space P(X), a Riemannian manifold. The following properties of P(X) will be useful for our purposes : 1. Its tangent space T p P(X) can be identified as a subspace of the space of vector fields on X; the Riemannian metric of P(X) is defined as for all p ∈ P(X), X, Y ∈ T p P(X); the inner product on the right hand side above is determined by the Riemannian structure of X. We will also consider a few other spaces of distributions, including the Wasserstein-Fisher-Rao space , and the H-Wasserstein space introduced in . On the data space, we need to introduce the notion of density, i.e. the Radon-Nikodym derivative w.r.t. a suitable base measure. The Hausdorff measure is one such choice; it reduces to the Lebesgue measure when X = R n. In most cases, distributions on manifolds are specified using their density w.r.t. the Hausdorff measure; e.g. "uniform" distributions has constant densities in this sense. Finally, the data space X will be embedded in R n; we refer to real-valued functions on the space of distributions as functionals; we denote the functional q → KL(q p) as KL p; we adopt the Einstein summation convention, and omit the summation symbol when an index appears both as subscript and superscript on one side of an equation, e.g. Now we review the sampling algorithms considered in this work. They include diffusion-based MCMC, particle-based variational inference, and other stochastic interacting particle systems. Riemannian Langevin Dynamics Suppose our target distribution has density p(x) w.r.t. the Hausdorff measure of X. In a local chart U ⊂ X, let G: U → R m×m be the coordinate matrix of its Riemannian metric. Then the Riemannian Langevin dynamics corresponds to the following stochastic differential equation in the chart 2: where and (g ij) is the coordinate of the matrix G −1. It is known that the Riemannian Langevin dynamics is the gradient flow of the KL functional KL p (q):= KL(q p) in the 2-Wasserstein space P(X). Particle-based Samplers A range of samplers approximate the gradient flow of KL p in various spaces, using deterministic or stochastic interacting particle systems. 3 For instance, Stein variational gradient descent (SVGD;) simulates the gradient flow in the so-called H-Wasserstein space , which replaces the Riemannian structure in P(X) with the RKHS inner product. Birth-death accelerated Langevin dynamics is a stochastic interacting particle system that simulates to the gradient flow of KL p in the Wasserstein-Fisher-Rao space. 
Finally, the stochastic particle-optimization sampler (SPOS; ;) combines the dynamics of SVGD and Langevin dynamics; as we will show in Appendix E.2, SPOS also has a gradient flow structure. In this section we present additional discussions about our framework. In Section B.1 we discuss other objectives that can be derived from our framework; in Section B.2 we show 2. differs from the form in some literature (e.g.), as in our case, the density of the target measure is defined w.r.t. the Hausdorff measure of X, instead of the Lebesgue measure in. See (, eq ) or (, Section 1.5). 3. There are other particle-based samplers (b,a;) corresponding to accelerated gradient flows. However, as we will be interested in the initial velocity of the flow, they do not lead to new objectives in our framework. our control variate could be applied to CD-1 and denoising score matching; finally, while readers familiar with Riemannian Brownian motion may be concerned about the use of local coordinates in our Riemannian score matching approximation, we show in Section B.3 it does not lead to issues. As our derivation is independent of the distribution space of choice, we can derive approximations to other learning objectives using samplers other than Langevin dynamics, as reviewed in Section A.2. An example is Riemannian Langevin dynamics which we have discussed in the main text; another example is when we choose the sampler as SVGD. In this case, we will obtain an approximation to the kernelized Stein discrepancy, generalizing the derivation in . When the sampling algorithm is chosen as SPOS, the corresponding MVL objective will be an interpolation between KSD and the Fisher divergence. See Appendix E.3 for derivations. Finally, the use of birth-death accelerated Langevin dynamics leads to a novel learning objective. In terms of applications, our work focuses on learning neural energy-based models, and we find these objectives do not improve over score matching in this aspect. However, these derivations generalize previous discussions, and establish new connections between sampling algorithms and learning objectives. It is also possible that these objectives could be useful in other scenarios, such as learning kernel exponential family models , direct estimation of the score function , and improving the training of GANs or amortized variational inference methods . As a side product of our work, we show in this section that our variance analysis explains the pitfall of two well-known approximations to the score matching objective: CD-1 and denoising score matching . Both approximations become unbiased as a step-size hyper-parameter → 0, but could not match the performance of exact score matching in practice, as witnessed in;;. Our analysis leads to novel control variates for these approximators. As we will show in Section D.1, the variance-reduced versions of the approximations have comparable performance to the exact score matching objective. The first two terms inside the norm represent a noise corrupted sample, and ψ θ represents a "single-step denoising direction" . It is proved that the optimal ψ satisfies ψ = σ 2 ∇ logp, wherep is the density of the corrupted distribution . Consider the stochastic estimator of. We assume a batch size of 1, and denote the data sample as x. To keep notations consistent, denote = σ 2, ψ θ (x) = ∇ x E(x; θ). Then the stochastic estimator iŝ Denotex:= x + √ z. 
By Taylor expansion we havê As which is known as the Hutchinson's trick , But V ar(B) = O, so as → 0, the rescaled estimator −2L dsm becomes unbiased with infinite variance; and subtracting (B) from (A) in a finite-variance estimator. Proposed as an approximation to the maximum likelihood estimate, the K-step contrastive divergence (CD-K) learning rule updates the model parameter with where ν is the learning rate, and p K is obtained from p by running K steps of MCMC. does not define a valid objective, since p K also depends on θ; however, proved that when K = 1 and the sampler is the Langevin dynamics, recovers the gradient of the score matching objective. Using the same derivation as in the main text, we can see that as the step-size of the sampler approaches 0 (and ν is re-scaled appropriately), the gradient produced by CD-1 also suffers from infinite variance, and this can be fixed using the same control variate. However, practical utility of CD-1 is still hindered by the fact that it does not correspond to a valid learning objective; consequently, it is impossible to monitor the training process for CD-1, or introduce regularizations such as early stopping 4. Readers familiar with Riemannian Brownian motion will notice that we used local coordinates when deriving the MPF objective, and this is only valid until the particle exits the local chart. In this section, we show that this does not affect the validity of our method; specifically, we prove in Proposition 3 that the local coordinate representation lead to valid approximation to the MVL objective in the compact case. We also argue in Remark 4 that the use of local coordinate does not lead to numerical instability. 4. In practice, the term EpE − Ep K E is often used to tract the training process of CD-K. It is not a proper loss; we can see from that when K = 1 and → 0, EpE − Ep K E is significantly different from the proper score matching (MVL) loss, by a term of Remark 1 While a more general than Proposition 3 is likely attainable (e.g. by replacing compactness of X with quadratic growth of the energy), this is out of the scope of our work; for our purpose, it is sufficient to note that the proposition covers manifolds like S n, and the local coordinate issue will not exist in manifolds possessing a global chart, such as H n. Lemma 2 (Theorem 3.6.1 in ) For any manifold M, x ∈ M, and a normal neighborhood B of x, there exists constant C > 0 such that the first exit time τ from B, of the Riemannian Brownian motion starting from x, satisfies for any L ≥ 1. Proposition 3 Assume the data manifold X is compact, and for all θ, E(·; θ) is in C 1. Let L mvl_rld be defined as in, X t following the true Riemannian Langevin dynamics targeting i.e. recovers true WMVL objective. Proof By the tower property of conditional expectation, it suffices to prove the when P (X 0 = x) = 1 for some x. Choose a normal neighborhood B centered at x such that B is contained by our current chart, and has distance from the boundary of the chart bounded by some δ > 0. Let C,τ be defined as in Lemma 2. Recall the Riemannian LD is the sum of a drift and the Riemannian BM. Since X is compact and E is in C 1, the drift term in the SDE will have norm bounded by some finite C. Thus the first exit time of the Riemannian LD is greater than min(τ, δ/C) =: τ. Let X t follow the true Riemannian LD,X t = X t when t < τ, and be such that E(X t) = 0 afterwards. 
5 , until τ,X t follows the local coordinate representation of Riemannian LD, thus on the event {≤ τ},X would correspond to y − in. As X is compact, the continuous energy function E is bounded by |E(·)| ≤ A for some finite A. Then for sufficiently small, In the above the first term converges to d dt E(E(X t)) t=0 as → 0, and Hence the proof is complete. 5. This is conceptually similar to the standard augmentation used in stochastic process texts; from a algorithmic perspective it can be implemented by modifying the algorithm so that in the very unlikely event when y − escapes the chart, we return 0 as the corresponding energy. We note that this is unnecessary for manifolds like S n, since the charts can be extended to R d and hence τ = ∞. Remark 4 It is argued that simulating diffusion-based MCMC in local coordinates leads to numeric instabilities (; a). We stress that in our setting of approximating MVL objectives, this is not the case. The reason is that we only need to do a single step of MCMC, with arbitrarily small step-size. Therefore, we could use different step-size for each sample, based on the magnitude of g and log q in their locations. We can also choose different local charts for each sample, which is justified by the proposition above. Our work concerns scalable learning algorithms for unnormalized models. This is a longstanding problem in literature, and some of the previous work is discussed in Section 1. Apart from those work, it is worth mentioning , which also designed a CD-like algorithm to approximate the kernelized Stein discrepancy; as we have discussed in Section B.1, in our framework there exists a similar algorithm, as well as a slight generalization when we replace SVGD with SPOS. Other notable work includes noise contrastive estimation (Gutmann and Hyvärinen, 2010) and Parzen score matching . However, to our knowledge, they have not been applied to complex unnormalized models such as those parameterized by DNNs, and a comparison would fall out of the scope of this work. Apart from the MVL formulation used in this work, there also exists other work on the connection between learning objectives of unnormalized model and infinitesimal actions of sampling dynamics. The minimum probability flow framework studies the slightly different objective lim →0 1 KL(p 0 p), where {p t} is the trajectory of the sampler. It also recovers score matching as a special instance; however, it does not lead to scalable learning algorithms for continuous-state unnormalized models as our method does; instead, its main application is in discrete-state models. Many of the MVL objectives we have derived are also instances of the Stein discrepancy . This interpretation is helpful in establishing theoretical properties, but it does not lead to scalable implementations of these objectives, that do not depend on higher-order derivatives. To demonstrate the proposed estimators have small bias and variance, we first evaluate them on low-dimensional synthetic data. We also verify the claim in Section B.2 that our control variate improves the performance of CD-1 and DSM. In this section, we evaluate our MVL approximation to the Euclidean score matching objective, as well as the variance-reduced DSM objective 6. 6. An experiment confirming the effectiveness of our control variate on CD-1 is presented in Appendix D.1.3. We evaluate the bias and variance of our estimators by comparing them to sliced score matching (SSM), an unbiased estimator for the score matching objective. 
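The variance-reduced DSM objective evaluated in this section can be sketched as follows, using the standard DSM parameterization with the model score taken as -grad E; theta-independent constants are dropped, so this should be treated as an assumed re-derivation rather than the exact form or code from Appendix B.2.

```python
import torch

def dsm_loss_with_control_variate(energy_net, x, sigma):
    """Denoising score matching with the zero-mean control variate <score(x), z>/sigma (a sketch)."""
    def score(v):
        v = v.detach().requires_grad_(True)
        g, = torch.autograd.grad(energy_net(v).sum(), v, create_graph=True)
        return -g                              # model score, assuming q proportional to exp(-E)

    z = torch.randn_like(x)
    s_noisy = score(x + sigma * z)             # model score at the corrupted point
    s_clean = score(x)                         # model score at the clean point, for the control variate

    quad = 0.5 * s_noisy.flatten(1).pow(2).sum(dim=-1)
    cross = ((s_noisy - s_clean) * z).flatten(1).sum(dim=-1) / sigma
    return (quad + cross).mean()
```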
We choose the data distribution p as the 2-D banana dataset from , and the model distribution q θ as an EBM trained on that dataset. We estimate the squared bias with a stochastic upper bound, using 5,000,000 samples. More specifically, denote the two methods as, respectively. We estimate the squared bias as, where Observe that this expectation of the estimate upper bounds the true squared bias following Cauchy's inequality; and the bias → 0 as K, M → 0. We choose K = 100, M = 50000 and plot the confidence interval. We also use these samples to estimate the variance of our estimator. For the model distribution q, we choose an EBM as stated in the main text. The energy of the model is parameterized as follows: we parameterize a d-dimensional vector ψ(x; θ) using a feed-forward network, then return x ψ(x; θ) as the energy function. This is inspired by the "score network" parameterization in ; we note that this choice has little influence on the synthetic experiments (and is merely chosen here for consistency), but leads to improved performance in the AE experiments. Finally, ψ(x; θ) is parameterized with 2 hidden layers and Swish activation , and each layer has 100 units. We apply spectral normalization to the intermediate layers. We train the EBM for 400 iterations with our approximation to the score matching objective, using a batch size of 200 and a learning rate of 4 × 10 −3. The choice of training objective is arbitrary; changing it to sliced score matching does not lead to any notable difference, as is expected from this experiment. The are shown in Figure 2, in which we plot the (squared) bias and variance for both estimators, with varying step-size. The bias plot is shown in the left. We can see that for both estimators, the bias is negligible at ≤ 10 −2. We further use a z-test to compare the mean of the two estimators (for = 6 × 10 −5) with the mean of SSM. The p value is 0.48 for our estimator and 0.19 for DSM, indicating there is no significant difference in either case. The variance of the estimators, with and without our control variate, are shown in Figure 2 right. As expected, the variance grows unbounded in absence of the control variate, and is approximately constant when it is added. From the scale of the variance, we can see that it is exactly this variance problem that causes the failure of the original DSM estimator. We now evaluate our approximation to the Riemannian score matching objective, by learning an unnormalized model. The data distribution is chosen as a mixture of two von Mises distributions on S 1: p(x) = 0.7p vM (x|, 2) + 0.3p vM (x|(0.5, −0.5), 3), where p vM is the von Mises density The energy function in the model is parameterized with a feed-forward network, using the same score-network-inspired parameterization as in the last experiment. The network uses tanh activation and has 2 hidden layers, each layer with 100 units. We generate 50,000 samples from p(x) for training. We use full batch training and train for 6,000 iterations, using a learning rate of 5 × 10 −4. The step-size hyperparameter in the MVL approximation is set to 10 −5. Results We plot the log densities of the ground truth distribution as well as the learned model in Figure 3. We can see the two functions matches closely, suggesting our method is suitable for density estimation on manifolds. 
To verify our control variate also solves the variance issue in CD-1, we train EBMs using CD-1 with varying step-size, with and without our control variate, and compare the score matching loss to EBMs trained with our method as well as sliced score matching. We use a separate experiment for CD-1 since it only estimates the gradient of the score matching loss. The score matching loss is calculated using SSM on training set, and averaged over 3 separate runs. We use the cosine dataset in ; the energy parameterization is the same as in Section D.1.1. The are shown in Figure 4. We can see that with the introduction of the control variate, CD-1 performs as well as other score matching methods. In all auto-encoder experiments, setup follows from whenever they applies. The only difference is that for score estimation, we parameterize the energy function, and use its gradient as the score estimate, as opposed to directly parameterizing the score function as done in . This modification makes our method applicable; essentially, it corrects the score estimation in so that it constitute a conservative field, which is a desirable property since score functions should be conservative. For this reason, we re-implement all experiments for Euclidean-prior auto-encoders to ensure a fair comparison. The are slightly worse than for the VAE experiment, but significantly better for WAE experiments. It should be noted that for the VAE experiment, our implicit hyperspherical VAE is still better than the implicit Euclidean VAE reported in . The (conditional) energy function in this experiment is parameterized using the score-net-inspired method described in Appendix D.1, with a feed-forward network. The network has 2 hidden layers, each with 256 hidden units. We use tanh activation for the network, and do not apply spectral normalization. When training the energy network, we add a L2 regularization term for the energy scale, with coefficient 10 −4. The coefficient is determined by grid search on {10 −3, 10 −4, 10 −5}, using AIS-estimated likelihood on a heldout set created from the training set. The step-size of the MVL approximation is set to 10 −3; we note that the performance is relatively insensitive w.r.t. the step-size inside the range of [10 −4, 10 −2], as suggested by the synthetic experiment. Outside this range, using a smaller step-size makes the worse, presumably due to floating point errors. For implicit models, the test likelihood is computed with annealed importance sampling, using 1,000 intermediate distributions, following . The transition operator in AIS is HMC for Euclidean-space latents, and Riemannian LD for hyperspherical latents. The training setup follows from : for all methods, we train for 100,000 iterations using RMSProp use a batch size of 128, and a learning rate of 10 −3. WAE Experiment on MNIST For our method, the energy network is parameterized in the same way as in the VAE experiments. When training the energy network, we use a step-size of 10 −4, and apply L2 regularization on the energy scale with coefficient 10 −5. For the WAE-GAN baseline, we use the Wasserstein GAN , and parameterize its critic as a feed-forward network with 2 hidden layers, each with 256 units. We experimented with both the standard parameterization (i.e. put a linear layer after the last hidden layer that outputs scalar) as well as the score-network-like parameterization used in our energy network, and found the to be similar. 
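The modification described above for the auto-encoder experiments, parameterizing an energy and taking its gradient as the score estimate so that the estimate is a conservative field, amounts to the following small sketch. The sign convention assumes the unnormalized model q(z) ∝ exp(−E(z)); if the opposite convention is used for the energy, the sign flips.

```python
import torch

def score_from_energy(energy_fn, z):
    """Score estimate as the gradient of a scalar energy (conservative by construction).

    Assumes q(z) ∝ exp(-E(z)), so that grad_z log q(z) = -grad_z E(z).
    """
    z = z.detach().requires_grad_(True)
    e = energy_fn(z).sum()     # sum over the batch; per-sample gradients are preserved
    grad_e = torch.autograd.grad(e, z, create_graph=True)[0]
    return -grad_e
```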
We use tanh activation, apply spectral normalization and a L2 regularization with coefficient 10 −4. The rest of the training setup follows from : training for 100,000 iterations using RMSProp, a batch size of 128, and a learning rate of 10 −3. The Lagrange multiplier hyperparameter λ in the WAE objective is fixed at 10. The energy network is parameterized in the same way as in . For our method, we use a step-size of 10 −4. For the GAN baseline, we use the standard parameterization for the critic, i.e. the final linear layer outputs a scalar; the previous layers follow the same architecture of ours. In both methods we use a L2 regularization with coefficient 10 −5. Following , we train for 100,000 iterations, using RMSProp and a learning rate of 10 −4. FID scores are calculated using the implementation in . Notations In this section, let the parameter space be d-dimensional, and define While in the main text, we identified the tangent space of P(X) as a subspace of L 2 (ρX → R d) for clarity, here we use the equivalent definition T ρ (P(X)):= {s ∈ L 2 (ρX → R): E ρ s = 0} following . The two definition are connected by the transform In this section, we give a formal derivation of SPOS as the gradient flow of the KL divergence functional, with respect to a new metric. Recall the SPOS sampler targeting distribution (with density) φ corresponds to the following density evolution: ∂ t ρ t = −∇ · (ρ t (x) (φ * ρt,φ (x) + α∇ log(φ/ρ)) νt(x) ) where α > 0 is a hyperparameter, and φ * ρt,φ (x):= E ρt(x) (S φ ⊗ k)(x, x):= E ρt(x) [(∇ x log φ(x))k(x, x) + ∇ x k(x, x)] is the SVGD update direction . Fix ρ, define the integral operator and define the tensor product operator K ⊗d which we will derive shortly at the end of this subsection. Subsequently, we have The rest of our derivation follows : consider the function space H ρ,α:= {(αId + K ⊗d ρt)[∇h]}, where h: X → R is any square integrable and differentiable function. It connects to the tangent space of P(X) if we consider s = −∇ · (ρp) for anỹ p ∈ H ρ,α. Define on H ρ,α the inner product f, g Hρ,α:= f, (αId + K It then determines a Riemannian metric on the function space. Forp ∈ H ρ,α and s = −∇·(ρp), by we have ν t,p Hρ,α = E ρt(x) ∇ log(φ/ρ t)(x),p(x) = − log φ ρ t (∇ · (pρ))dx = −(dKL φ)(s), i.e. with respect to the metric, SPOS is the gradient flow of the (negative) KL divergence functional. Derivation of let (λ i, ψ i) ∞ i=1 be its eigendecomposition (i.e. the Mercer representation). For j ∈ [d] let ψ i,j:= ψ i e j where {e j} d j=1 is the coordinate basis in R d, so {λ −1/2 i ψ i,j} becomes an orthonormal basis in H ⊗d. Now we calculate the coordinate of φ * ρ,φ in this basis. S φ is known to satisfy the Stein's identity for all g ∈ H. Thus, we can subtract E ρ S ρ (K ρ [ψ i,j]) from the right hand side of without changing its value, and it becomes As the equality holds for all i, k, we completed the derivation of. By and, the MVL objective derived from SPOS is Hρ,α = ∇ log(φ/ρ t), (αId + K ⊗d)∇ log(φ/ρ t) L 2 (ρX →R d). In the right hand side above, the first term in the summation is the Fisher divergence, and the second is the kernelized Stein discrepancy (b, Definition 3.2). We note that a similar for SVGD has been derived in , and our derivations connect to the observation that Langevin dynamics can be viewed as SVGD with a Dirac function kernel (thus SPOS also corresponds to SVGD with generalized-function-valued kernels). 
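Written out, the final identity discussed in the SPOS derivation above reads as follows. This is a reconstruction from the surrounding text (the name on the left-hand side is ours; α is the SPOS hyperparameter, K_ρ the integral operator defined by the kernel, F the Fisher divergence, and KSD the kernelized Stein discrepancy in the sense of the cited definition):

```latex
\mathcal{L}^{\mathrm{SPOS}}_{\mathrm{MVL}}(\rho_t,\phi)
  \;=\; \Big\langle \nabla \log\tfrac{\phi}{\rho_t},\;
        \big(\alpha\,\mathrm{Id} + K_{\rho_t}^{\otimes d}\big)\,
        \nabla \log\tfrac{\phi}{\rho_t} \Big\rangle_{L^2(\rho_t;\,\mathcal{X}\to\mathbb{R}^d)}
  \;=\; \alpha\,\mathrm{F}(\rho_t,\phi) \;+\; \mathrm{KSD}(\rho_t,\phi).
```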
In this section we derive, when the latent-space distribution q φ (z) is defined on a pdimensional manifold embedded in some Euclidean space, and H[q φ (z)] is the relative entropy w.r.t. the Hausdorff measure. The derivation is largely similar to the Euclidean case, and we only include it here for completeness. holds because where (i) follows from Theorem 2.10.10 in , and (ii) follows from the same theorem as well as the fact that E q φ (z) [∇ φ log q φ (z)] = ∇ φ q φ (z)dz = 0. | We present a scalable approximation to a wide range of EBM objectives, and applications in implicit VAEs and WAEs | 1,279 | scitldr |
This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and entirely new tasks emerge. Experimental show that VCL outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way. Continual learning (also called life-long learning and incremental learning) is a very general form of online learning in which data continuously arrive in a possibly non i.i.d. way, tasks may change over time (e.g. new classes may be discovered), and entirely new tasks can emerge BID43 BID47 BID39. What is more, continual learning systems must adapt to perform well on the entire set of tasks in an incremental way that avoids revisiting all previous data at each stage. This is a key problem in machine learning since real world tasks continually evolve over time (e.g. they suffer from covariate and dataset shift) and the size of datasets often prohibits frequent batch updating. Moreover, practitioners are often interested in solving a set of related tasks that benefit from being handled jointly in order to leverage multi-task transfer. Continual learning is also of interest to cognitive science, being an intrinsic human ability. The ubiquity of deep learning means that it is important to develop deep continual learning methods. However, it is challenging to strike a balance between adapting to recent data and retaining knowledge from old data. Too much plasticity leads to the infamous catastrophic forgetting problem BID34 BID36 BID13 and too much stability leads to an inability to adapt. Recently there has been a resurgence of interest in this area. One approach trains individual models on each task and then carries out a second stage of training to combine them BID28. A more elegant and more flexible approach maintains a single model and uses a single type of regularized training that prevents drastic changes in the parameters which have a large influence on prediction, but allows other parameters to change more freely BID29 BID26 BID50. The approach developed here follows this venerable work, but is arguably more principled, extensible and automatic. This paper is built on the observation that there already exists an extremely general framework for continual learning: Bayesian inference. Critically, Bayesian inference retains a distribution over model parameters that indicates the plausibility of any setting given the observed data. When new data arrive, we combine what previous data have told us about the model parameters (the previous posterior) with what the current data are telling us (the likelihood). Multiplying and renormalizing yields the new posterior, from which point we can recurse. Critically, the previous posterior constrains parameters that strongly influence prediction, preventing them from changing drastically, but it allows other parameters to change. The wrinkle is that exact Bayesian inference is typically intractable and so approximations are required. Fortunately, there is an extensive literature on approximate inference for neural networks. We merge online variational inference (VI) BID11 BID42 BID4 with Monte Carlo VI for neural networks BID3 to yield variational continual learning (VCL). 
In addition, we extend VCL to include a small episodic memory by combining VI with the coreset data summarization method BID0 BID19. We demonstrate that the framework is general, applicable to both deep discriminative models and deep generative models, and that it yields excellent performance. Consider a discriminative model that returns a probability distribution over an output y given an input x and parameters θ, that is p(y|θ, x). Below we consider the specific case of a softmax distribution returned by a neural network with weight and bias parameters, but we keep the development general for now. In the continual learning setting, the goal is to learn the parameters of the model from a set of sequentially arriving datasets {x DISPLAYFORM0 where, in principle, each might contain a single datum, N t = 1. Following a Bayesian approach, a prior distribution p(θ) is placed over θ. The posterior distribution after seeing T datasets is recovered by applying Bayes' rule: DISPLAYFORM1 Here the input dependence has been suppressed on the right hand side to lighten notation. We have used the shorthand D t = {y DISPLAYFORM2 . Importantly, a recursion has been identified whereby the posterior after seeing the T -th dataset is produced by taking the posterior after seeing the (T − 1)-th dataset, multiplying by the likelihood and renormalizing. In other words, online updating emerges naturally from Bayes' rule. In most cases the posterior distribution is intractable and approximation is required, even when forming the first posterior p(θ|D 1) ≈ q 1 (θ) = proj(p(θ)p(D 1 |θ)). Here q(θ) = proj(p * (θ)) denotes a projection operation that takes the intractable un-normalized distribution p * (θ) and returns a tractable normalized approximation q(θ). The field of approximate inference provides several choices for the projection operation including i) Laplace's approximation, ii) variational KL minimization, iii) moment matching, and iv) importance sampling. Having approximated the first posterior distribution, subsequent approximations can be produced recursively by combining the approximate posterior distribution with the likelihood and projecting, that is p(θ|D 1:T) ≈ q T (θ) = proj(q T −1 (θ)p(D T |θ)). In this way online updating is supported. This general approach leads, for the four projection operators previously identified, to i) Laplace propagation BID46, ii) online VI BID11 BID42 ) also known as streaming variational Bayes BID4, iii) assumed density filtering BID33 and iv) sequential Monte Carlo BID30. In this paper the online VI approach is used as it typically outperforms the other methods for complex models in the static setting ) and yet it has not been applied to continual learning of neural networks. Variational continual learning employs a projection operator defined through a KL divergence minimization over the set of allowed approximate posteriors Q, DISPLAYFORM0 The zeroth approximate distribution is defined to be the prior, q 0 (θ) = p(θ). Z t is the intractable normalizing constant of p * t (θ) = q t−1 (θ) p(D t |θ) and is not required to compute the optimum. VCL will perform exact Bayesian inference if the true posterior is a member of the approximating family, p(θ|D 1, D 2, . . ., D t) ∈ Q at every step t. Typically this will not be the case and we might worry that performing repeated approximations may accumulate errors causing the algorithm to forget old tasks, for example. Furthermore, the minimization at each step may also be approximate (e.g. 
due to employing an additional Monte Carlo approximation) and so additional information may be lost. In order to mitigate this potential problem, we extend VCL to include a small representative set of data from previously observed tasks that we call the coreset. The coreset is analogous to an episodic memory that retains key information (in our case, important training data points) from previous tasks which the algorithm can revisit in order to refresh its memory of them. The use of an episodic memory for continual learning has also been explored by BID31. Input: Prior p(θ). Output: Variational and predictive distributions at each step {qt(θ), p(y * |x *, D1:t)} T t=1. Initialize the coreset and variational approximation: C0 ← ∅,q0 ← p. for t = 1... T do Observe the next dataset Dt. Ct ← update the coreset using Ct−1 and Dt. Update the variational distribution for non-coreset data points: DISPLAYFORM0 Compute the final variational distribution (only used for prediction, and not propagation): DISPLAYFORM1 Perform prediction at test input x *: p(y * |x *, D1:t) = qt(θ)p(y * |θ, x *)dθ. end for Algorithm 1 describes coreset VCL. For each task, the new coreset C t is produced by selecting new data points from the current task and a selection from the old coreset C t−1. Any heuristic can be used to make these selections, e.g. K data points can be selected at random from D t and added to C t−1 to form an unbiased new coreset C t. Alternatively, the greedy K-center algorithm BID12 can be used to return K data points that are guaranteed to be spread throughout the input space. Next, a variational recursion is developed. Bayes' rule can be used to decompose the true posterior taking care to break out contributions from the coreset, DISPLAYFORM2 Here the variational distributionq t (θ) approximates the contribution to the posterior from the noncoreset data points. A recursion is identified by noting DISPLAYFORM3 Hence propagation is performed viaq t (θ) = proj(q t−1 (θ)p(D t ∪ C t−1 \ C t |θ)) with VCL employing the variational KL projection. A further projection step is needed before performing prediction q t (θ) = proj(q t (θ)p(C t |θ)). In this way the coreset is incorporated into the approximate posterior directly before prediction which helps mitigate any residual forgetting. From a more general perspective, coreset VCL is equivalent to a message-passing implementation of VI in which the coreset data point updates are scheduled after updating the other data. The VCL framework is general and can be applied to many discriminative probabilistic models. Here we apply it to continual learning of deep fully-connected neural network classifiers. Before turning to the application of VCL, we first consider the architecture of neural networks suitable for performing continual learning. In simple instances of discriminative continual learning, where data are arriving in an i.i.d. way or where only the input distribution p(x 1:T) changes over time, a standard single-head discriminative neural network suffices. In many cases the tasks, although related, might involve different output variables. Standard practice in multi-task learning BID1 uses networks that share parameters close to the inputs but with separate heads for each output, hence multi-head networks. Graphical models depicting the network architecture for deep discriminative and deep generative models are shown in FIG0. 
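The control flow of coreset VCL described above (Algorithm 1) can be summarized by the following skeleton. The projection operator, the coreset-selection heuristic (random or greedy K-center), and the prediction step are left as stubs, so this is a sketch of the recursion only, not a complete implementation.

```python
def coreset_vcl(prior, datasets, select_coreset, proj, predict, k=200):
    """Skeleton of coreset VCL (Algorithm 1).

    prior           -- initial distribution q_0(theta) = p(theta)
    datasets        -- per-task datasets D_1, ..., D_T
    select_coreset  -- returns (new_coreset C_t, non_coreset_data D_t ∪ C_{t-1} \\ C_t)
    proj            -- KL projection: proj(q, data) approximates q(theta) p(data | theta)
    predict         -- forms the predictive distribution from q_t(theta)
    """
    coreset = []            # C_0
    q_tilde = prior         # \tilde{q}_0, the distribution that is propagated
    results = []
    for D_t in datasets:
        coreset, non_coreset = select_coreset(coreset, D_t, k)
        # Propagate with the non-coreset data points.
        q_tilde = proj(q_tilde, non_coreset)
        # Incorporate the coreset only for the distribution used at prediction time.
        q_t = proj(q_tilde, coreset)
        results.append(predict(q_t))
    return results
```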
Recent work has explored more advanced structures for continual learning BID40 and multi-task learning more generally BID48 BID37. These architectural advances are complementary to the new learning schemes developed here and a synthesis of the two would be potentially more powerful. Moreover, a general solution to continual learning would perform automatic continual model building adding new bespoke structure to the existing model as new tasks are encountered. Although this is a very interesting research direction, here we make the simplifying assumption that the model structure is known a priori. VCL requires specification of q(θ) where θ in the current case is a D dimensional vector formed by stacking the network's biases and weights. For simplicity we use a Gaussian mean-field approximate posterior DISPLAYFORM0 Taking the most general case of a multi-head network, before task k is encountered the posterior distribution over the associated head parameters will remain at the prior and so q(θ DISPLAYFORM1 . This is convenient as it means the variational approximation can be grown incrementally, starting from the prior, as each task emerges. Moreover, only tasks present in the current dataset D t need to have their posterior distributions over head parameters updated. The shared parameters, on the other hand, will be constantly updated. Training the network using the VFE approach in eq. is equivalent to maximizing the negative online variational free energy or the variational lower bound to the online marginal likelihood DISPLAYFORM2 with respect to the variational parameters DISPLAYFORM3. Whilst the KL-divergence KL(q t (θ)||q t−1 (θ)) can be computed in closed-form, the expected log-likelihood requires further approximation. Here we take the usual approach of employing simple Monte Carlo and use the local reparameterization trick to compute the gradients BID41 BID23. At the first time step, the prior distribution, and therefore q 0 (θ) is chosen to be a multivariate Gaussian distribution (see e.g. BID15 BID3 . Deep generative models (DGMs) have garnered much recent attention. By passing a simple noise variable (e.g. Gaussian noise) through a deep neural network, these models have been shown to be able to generate realistic images, sounds and videos sequences BID8 BID25 BID49. Standard approaches for learning DGMs have focused on batch learning, i.e. the observed instances are assumed to be i.i.d. and are all available at the same time. In this section we extend the VCL framework to encompass variational auto-encoders (VAEs) BID23 BID38, a form of DGM. The approach could be extended to generative adversarial networks (GANs) BID14 for which continual learning is an open problem (see BID44 for an initial attempt).Consider a model p(x|z, θ)p(z), for observed data x and latent variables z. The prior over latent variables p(z) is typically Gaussian, and the distributional parameters of p(x|z, θ) are defined by a deep neural network. For example, if Bernoulli likelihood is used, then p(x|z, θ) = Bern(x; f θ (z)), where f θ denotes the deep neural network transform and θ collects all the weight matrices and bias vectors. In the batch setting, given a dataset DISPLAYFORM0, the standard VAE approach learns the parameters θ by approximate maximum likelihood estimation (MLE). 
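For the discriminative case above, the per-task objective is the expected log-likelihood of the new data minus the KL divergence from the previous approximate posterior, estimated with Monte Carlo samples of the weights. The sketch below uses plain reparameterization on a flattened weight vector for brevity; the paper uses the local reparameterization trick, and the likelihood function is a placeholder.

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ), summed over dims."""
    return 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum()

def vcl_task_loss(mu, logvar, mu_prev, logvar_prev, log_lik_fn, n_samples=10):
    """Negative per-task VCL objective for a mean-field Gaussian posterior.

    log_lik_fn(theta) returns the log-likelihood of the current task's data under a
    single sampled weight vector theta (assumed helper, not from the paper's code).
    """
    expected_ll = 0.0
    for _ in range(n_samples):
        eps = torch.randn_like(mu)
        theta = mu + (0.5 * logvar).exp() * eps      # reparameterized weight sample
        expected_ll = expected_ll + log_lik_fn(theta)
    expected_ll = expected_ll / n_samples
    kl = gaussian_kl(mu, logvar, mu_prev, logvar_prev)
    return -(expected_ll - kl)                        # minimize the negative bound
```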
This proceeds by maximizing the variational lower bound with respect to θ and φ: DISPLAYFORM1 where φ are the variational parameters of the approximate posterior or "encoder".The approximate MLE approach is unsuitable for the continual learning setting as it does not return parameter uncertainty estimates that are critical for weighting the information learned from old data. So, instead the VCL approach will approximate the full posterior distribution over parameters, q t (θ) ≈ p(θ|D 1:t), after observing the t-th dataset. Specifically, the approximate posterior q t is obtained by maximizing the full variational lower bound with respect to q t and φ: DISPLAYFORM2 where the encoder network q φ (z DISPLAYFORM3 is parameterized by φ which is task-specific. It is likely to be beneficial to share (parts of) these encoder networks, but this is not investigated in this paper. As was the case for multi-head discriminative models, we can split the generative model into shared and task-specific parts. There are two options: (i) the generative models share across tasks the network that generates observations x from the intermediate-level representations h, but have private "head networks" for generating h from the latent variables z (see FIG0), and (ii) the other way around. Architecture (i) is arguably more appropriate when data are composed of a common set of structural primitives (such as strokes in handwritten digits) that are selected by high level variables (character identities). Moreover, initial experiments on architecture (ii) indicated that information about the current task tended to be encoded entirely in the task-specific lower-level network negating multi-task transfer. For these reasons, we focus on architecture (i) in the experiments. Continual Learning for Deep Discriminative Models: Many neural network continual learning approaches employ regularized maximum likelihood estimation, optimizing objectives of the form: DISPLAYFORM0. Here the regularization biases the new parameter estimates towards those estimated at the previous step θ t−1. λ t is a user-selected hyper-parameter that controls the overall contribution from previous data and Σ t−1 is a matrix (normally diagonal in form) that encodes the relative strength of the regularization on each element of θ. We now discuss specific instances of this scheme:• Maximum-likelihood estimation and MAP estimation: maximum likelihood estimation is recovered when there is no regularization (λ t = 0). More generally, the regularization term can be interpreted as a Gaussian prior, q(θ|D 1:t−1) = N (θ; θ t−1, Σ t−1 /λ t). The optimization returns the maximum a posteriori estimate of the parameters, but this does not directly provide Σ t for the next stage. 
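The family of regularized maximum-likelihood objectives referred to at the start of this related-work discussion can be written as follows (a reconstruction from the surrounding definitions: λ_t is the user-selected regularization strength, Σ_{t−1} the per-parameter weighting matrix, and θ_{t−1} the previous estimate):

```latex
\theta_t \;=\; \arg\max_{\theta}\;
  \Big[ \log p(\mathcal{D}_t \mid \theta)
        \;-\; \tfrac{1}{2}\,\lambda_t\,
        (\theta - \theta_{t-1})^{\top}\, \Sigma_{t-1}^{-1}\, (\theta - \theta_{t-1}) \Big].
```

The methods listed in the bullet points differ only in how λ_t and Σ_{t−1} are chosen and updated.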
A simple fix is to set Σ t = I and use cross-validation to find λ t, but this approximation is often coarse and leads to catastrophic forgetting BID13 BID26 ).• Laplace Propagation (LP) BID46: applying Laplace's approximation at each step leads to a recursion for Σ −1 t, which is initialized using the covariance of the Gaussian prior, Σ −1 DISPLAYFORM1 To avoid computing the full Hessian of the likelihood, diagonal Laplace propagation retains only the diagonal terms of Σ −1 t.Published as a conference paper at ICLR 2018• Elastic Weight Consolidation (EWC) BID26 builds on diagonal Laplace propagation by approximating the average Hessian of the likelihoods using well-known identities for the Fisher information: DISPLAYFORM2 EWC also modifies the Laplace regularization, DISPLAYFORM3, introducing hyper-parameters, removing the prior and regularizing to intermediate parameter estimates, rather than just those derived from the last task, DISPLAYFORM4 These changes may be unnecessary BID20 BID21 and require storing θ 1:t−1, but may slightly improve performance (see BID27 and our experiments).• Synaptic Intelligence (SI) BID50: SI computes Σ −1 t using a measure of the importance of each parameter to each task. Practically, this is achieved by comparing the changing rate of the gradients of the objective and the changing rate of the parameters. VCL differs from the above methods in several ways. First, unlike MAP, EWC and SI, it does not have free parameters that need to be tuned on a validation set. This can be especially awkward in the online setting. Second, although the KL regularization penalizes the mean of the approximate posterior through a quadratic cost, a full distribution is retained and averaged over at training time and at test time. Third, VI is generally thought to return better uncertainty estimates than approaches like Laplace's method and MAP estimation, and we have argued this is critical for continual learning. There is a long history of research on approximate Bayesian training of neural networks, including extended Kalman filtering BID45 ), Laplace's approximation BID32, variational inference BID18 BID2 BID15 BID3 BID10, sequential Monte Carlo BID9, expectation propagation (EP) BID16, and approximate power EP. These approaches have focused on batch learning, but the framework described in section 2 enables them to be applied to continual learning. On the other hand, online variational inference has been previously explored BID11 BID4 BID6, but not for neural networks or in the context of sets of related complex tasks. Continual Learning for Deep Generative Models: A naïve continual learning approach for deep generative models would directly apply the VAE algorithm to the new dataset D t with the model parameters initialized at the previous parameter values θ t−1. The experiments show that this approach leads to catastrophic forgetting, in the sense that the generator can only generate instances that are similar to the data points from the most recently observed task. Alternatively, EWC regularization can be added to the VAE objective: DISPLAYFORM5 However computing Φ t requires the gradient of the intractable marginal likelihood ∇ θ log p(x|θ). Instead, we can approximate the marginal likelihood by the variational lower bound, i.e. DISPLAYFORM6 Similar variational lower-bound approximations apply when computing the Hessian matrices for LP and Σ −1 t for SI. An importance sampling estimate could also be used BID7. 
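The diagonal weighting used by EWC above is built from squared per-example log-likelihood gradients (the Fisher identity). A minimal sketch of such an estimate is given below; the `model.log_prob(x, y)` helper and the choice of averaging over the observed labels (the "empirical" Fisher) are illustrative assumptions, not the paper's code.

```python
import torch

def diagonal_fisher(model, examples):
    """Diagonal Fisher information estimate of the kind used by EWC.

    Approximates F_ii ≈ (1/N) Σ_n (∂ log p(y_n | x_n, θ) / ∂θ_i)^2 using examples
    from the current task, provided as a list of (x, y) pairs.
    """
    fisher = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    for x, y in examples:
        model.zero_grad()
        model.log_prob(x, y).backward()      # assumed helper returning a scalar log-likelihood
        for name, p in model.named_parameters():
            if p.grad is not None:
                fisher[name] += p.grad.detach() ** 2
    n = max(len(examples), 1)
    return {name: f / n for name, f in fisher.items()}
```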
The experiments evaluate the performance and flexibility of VCL through three discriminative tasks and two generative tasks. Standard continual learning benchmarks are used where possible. Comparisons are made to EWC, diagonal LP and SI that employ tuned hyper-parameters λ whereas VCL's objective is hyper-parameter free. More details of the experiment settings and an additional experiment are available in the appendix. We consider the following three continual learning experiments for deep discriminative models. Permuted MNIST: This is a popular continual learning benchmark BID13 BID26 BID50. The dataset received at each time step D t consists of labeled MNIST images whose pixels have undergone a fixed random permutation. We compare VCL to EWC, SI, and diagonal LP. For all algorithms, we use fully connected single-head networks with two hidden layers, where each layer contains 100 hidden units with ReLU activations. We evaluate three versions of VCL: VCL with no coreset, VCL with a random coreset, and VCL with a coreset selected by the K-center method. For the coresets, we select 200 data points from each task. FIG1 compares the average test set accuracy on all observed tasks. From this figure, VCL outperforms EWC, SI, and LP by large margins, even though they benefited from an extensive hyperparameter search for λ. Diagonal LP performs slightly worse than EWC both when λ = 1 and when the values of λ are tuned. After 10 tasks, VCL achieves 90% average accuracy, while EWC, SI, and LP only achieve 84%, 86%, and 82% respectively. The also show that the coresets perform poorly by themselves, but combining them with VCL leads to a modest improvement: both random coresets and K-center coresets achieve 93% accuracy. We also investigate the effect of the coreset size. In FIG3, we plot the average test set accuracy of VCL with random coresets of different sizes. At the coreset size of 5,000 examples per task, VCL achieves 95.5% accuracy after 10 tasks, which is significantly better than the 90% of vanilla VCL. Performance improves with the coreset size although it asymptotes for large coresets as expected: if a sufficiently large coreset is employed, it will be fully representative of the task and thus training on the coreset alone can achieve a good performance. However, the experiments show that the combination of VCL and coresets is advantageous even for large coresets. Split MNIST: This experiment was used by BID50 to assess the SI method. Five binary classification tasks from the MNIST dataset arrive in sequence: 0/1, 2/3, 4/5, 6/7, and 8/9. We use fully connected multi-head networks with two hidden layers comprising 256 hidden units with ReLU activations. We compare VCL (with and without coresets) to EWC, SI, and diagonal LP. For the coresets, 40 data points from each task are selected through random sampling or the K-center method. FIG4 compares the test set accuracy on individual tasks (averaged over 10 runs) as well as the accumulated accuracy averaged over tasks (right). As an upper bound on the algorithms' performance, we compare to batch VI trained on the full dataset. From this figure, VCL significantly outperforms EWC and LP although it is slightly worse than SI. Again, unlike VCL, EWC and SI benefited from a hyper-parameter search for λ, but a value close to 1 performs well in both cases. After 5 tasks, VCL achieves 97.0% average accuracy on all tasks, while EWC, SI, and LP attain 63.1%, 98.9%, and 61.2% respectively. Adding the coreset improves VCL to around 98.4% accuracy. 
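The Permuted MNIST benchmark used above applies one fixed random pixel permutation per task. A small sketch of how such a sequence of tasks can be generated is given below; whether the first task is left unpermuted is an implementation choice and is only an assumption here.

```python
import numpy as np

def make_permuted_tasks(images, labels, n_tasks, seed=0):
    """Create Permuted-MNIST-style tasks: each task uses one fixed pixel permutation.

    images: array of shape (N, 784); labels: array of shape (N,).
    Returns a list of (permuted_images, labels) pairs, one per task.
    """
    rng = np.random.RandomState(seed)
    tasks = []
    for t in range(n_tasks):
        perm = (np.arange(images.shape[1]) if t == 0
                else rng.permutation(images.shape[1]))
        tasks.append((images[:, perm], labels))
    return tasks
```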
This experiment is similar to the previous one, but it uses the more challenging notMNIST dataset and deeper networks. The notMNIST dataset 2 here contains 400,000 images of the characters from A to J with different font styles. We consider five binary classification tasks: A/F, B/G, C/H, D/I, and E/J using deeper networks comprising four hidden layers of 150 hidden units with ReLU activations. The other settings are kept the same as the previous experiment. VCL is competitive with SI and significantly outperforms EWC and LP (see FIG5), although the SI and EWC baselines benefited from a hyper-parameter search. VCL achieves 92.0% average accuracy after 5 tasks, while EWC, SI, and LP attain 71%, 94%, and 63% respectively. Adding the random coreset improves the performance of VCL to 96% accuracy. We consider two continual learning experiments for deep generative models: MNIST digit generation and notMNIST (small) character generation. In both cases, ten datasets are received in sequence. For MNIST, the first dataset comprises exclusively of images of the digit zero, the second dataset ones and so on. For notMNIST, the datasets contain the characters A to J in sequence. The generative model consists of shared and task-specific components, each represented by a one hidden layer neural network with 500 hidden units (see FIG0). The dimensionality of the latent variable z and the intermediate representation h are 50 and 500, respectively. We use task-specific encoders that are neural networks with symmetric architectures to the generator. We compare VCL to naïve online learning using the standard VAE objective, LP, EWC and SI (with hyper-parameters λ = 1, 10, 100). For full details of the experimental settings see Appendix E. Samples from the generative models attained at different time steps are shown in fig. 6. The naïve online learning method fails catastrophically and so numerical are omitted. LP, EWC, SI and VCL remember previous tasks, with SI and VCL achieving the best visual quality on both datasets. The algorithms are quantitatively evaluated using two metrics in FIG7: an importance sampling estimate of the test log-likelihood (test-LL) using 5, 000 samples and a measure of quality we term "classifier uncertainty". For the latter, we train a discriminative classifier for the digits/alphabets to achieve high accuracy. The quality of generated samples can then be assessed by the KL-divergence from the one-hot vector indicating the task, to the output classification probability vector computed on the generated images. A well-trained generator will produce images that are correctly classified in high confidence ing in zero KL. We only report the best performance for LP, EWC and SI.We observe that LP and EWC perform similarly, most likely due to the fact that both LP and EWC use the same Σ t matrices. EWC achieves significantly worse performance than SI. VCL is on par with or slightly better than SI. VCL has a superior long-term memory of previous tasks which leads to better overall performance on both metrics even though it does not have tuned hyper-parameters in its objective function. For MNIST, the performance of LP and EWC deteriorate markedly when moving from task "digit 0" to "digit 1" possibly due to the large task differences. Also for all experimental settings we tried, SI fails to produce high test-LL after task "digit 7". Future work will investigate continual learning on a sequence of tasks that follows "adversarial ordering", i.e. 
the ordering that makes the next task maximally different from the current task. Approximate Bayesian inference provides a natural framework for continual learning. Variational Continual Learning (VCL), developed in this paper, is an approach in this vein that extends online variational inference to handle more general continual learning tasks and complex neural network models. VCL can be enhanced by including a small episodic memory that leverages coreset algorithms from statistics and connects to message-scheduling in variational message passing. We demonstrated how the VCL framework can be applied to both discriminative and generative models. Experimental showed state-of-the-art performance when compared to previous continual learning approaches, even though VCL has no free parameters in its objective function. Future work should explore alternative approximate inference methods using the same framework and also develop more sophisticated episodic memories. Finally, we note that VCL is ideally suited for efficient model refinement in sequential decision making problems, such as reinforcement learning and active learning. DISPLAYFORM0 Figure 6: Generated images from each of the generators after training. Each of the columns shows the images generated from a specific task's generator, and each of the lines shows the generations from generators of all trained tasks. Clearly the naive approach suffers from catastrophic forgetting, while other approaches successfully remember previous tasks. In this experiment, we use fully connected single-head networks with two hidden layers, where each layer contains 100 hidden units with ReLU activations. The metric used for comparison is the test set accuracy on all observed tasks. We train all the models using the Adam optimizer BID22 with learning rate 10 −3 since we found that it works best for all models. All the VCL algorithms are trained with batch size 256 and 100 epochs. For all the algorithms with coresets, we choose 200 examples from each task to include into the coresets. The algorithms that use only the coresets are trained using the VFE method with batch size equal to the coreset size and 100 epochs. We use the prior N (0, I) and initialize our optimizer for the first task at the mean of the maximum likelihood model and a very small initial variance (10 −6).We compare the performance of SI with hyper-parameters λ = 0.01, 0.1, 0.5, 1, 2 and select the best one (λ = 0.5) as our baseline (see fig. 8). Following BID50, we train these models with batch size 256 and 20 epochs. We also compare the performance of EWC with λ = 1, 10, 10 2, 10 3, 10 4 and select the best value λ = 10 2 as our baseline (see fig. 9). The models are trained without dropout and with batch size 200 and 20 epochs. We approximate the Fisher information matrices in EWC using 600 random samples drawn from the current dataset. For diagonal LP, we compare the performance of λ = 0.01, 0.1, 1, 10, 100 and use the best value λ = 0.1 as our baseline (see FIG0). The models are also trained with prior N (0, I), batch size 200, and 20 epochs. The Hessians of LP are approximated using the Fisher information matrices with 200 samples. In this experiment, we use fully connected multi-head networks with two hidden layers, each of which contains 256 hidden units with ReLU activations. At each time step, we compare the test set accuracy of the current model on all observed tasks separately. We also plot the average accuracy over all tasks in the last column of FIG4. 
All the for this experiment are the averages over 10 runs of the algorithms with different random seeds. We use the Adam optimizer with learning rate 10 −3 for all models. All the VCL algorithms are trained with batch size equal to the size of the training set and 120 epochs. We use the prior N (0, I) and initialize our optimizer for the first task at the mean of the maximum likelihood model and a very small initial variance (10 −6). For the coresets, we choose 40 examples from each task to include into the coresets. In this experiment, the final approximate posterior used for prediction in eq. is computed for each task separately using the coreset points corresponding to the task. The algorithms that use only the coresets are trained using the VFE method with batch size equal to the coreset size and 120 epochs. We compare the performance of SI with λ = 0.01, 0.1, 1, 2, 3 and use the best value λ = 1 as our baseline (see FIG0). We also compare EWC with both single-head and multi-head models and λ = 1, 10, 10 2, 10 3, 10 4 (see FIG0). We approximate the Fisher information matrices using 200 random samples drawn from the current dataset. The figure shows that the multi-head models work better than the single-head models for EWC, and the performance is insensitive to the choice of λ. Thus, we use the multi-head model with λ = 1 as the EWC baseline for our experiment. For diagonal LP, we also use the multi-head model with λ = 1, prior N (0, I), and approximate the Hessians using the Fisher information matrices with 200 samples. The settings for this experiment are the same as those in the Split MNIST experiment above, except that we use deeper networks with 4 hidden layers, each of which contains 150 hidden units. FIG0 show the performance of SI and EWC with different hyper-parameter values respectively. In the experiment, we choose λ = 10 4 for multi-head EWC, λ = 1 for multi-head LP, and λ = 0.1 for SI. Here we consider a small experiment on a toy 2D dataset to understand some of the properties of EWC with λ = 1 and VCL. The experiment comprises two sequential binary classification tasks. The first task contains two classes generated from two Gaussian distributions. The data points (with green and black classes) for this task are shown in the first column of FIG0. The second task contains two classes also generated from two Gaussian distributions. The green class for this task has the same input distribution as the first task, while the input distribution for the black class is different. The data points for this task are shown in the third columns of FIG0. Each task contains 200 data points with 100 data points in each class. We compare the multi-head models trained by VCL and EWC on these two tasks. In this experiment, we use fully connected networks with one hidden layer containing 20 hidden units with ReLU activations. The first column of FIG0 shows the contours of the prediction probabilities after observing the first task, and both methods perform reasonably well for this task. However, after observing the second task, the EWC method fails to learn the classifiers for both tasks, while the VCL method are still able to learn good classifiers for them. In the experiments on Deep Generative Models, the learning rates and numbers of optimization epochs are tuned on separate training of each tasks. This gives a learning rate of 10 −4 and the number of epochs 200 for MNIST (except for SI) and 400 for notMNIST. For SI we optimize for 400 epochs on MNIST. 
For the VCL approach, the parameters of q t (θ) are initialized to have the same mean as q t−1 (θ) and the log standard deviation is set to 10 −6.The generative model consists of shared and task-specific components, each represented by a one hidden layer neural network with 500 hidden units (see FIG0). The dimensionality of the latent variable z and the intermediate representation h are 50 and 500, respectively. We use task-specific encoders that are neural networks with symmetric architectures to the generator. In many probabilistic models with conjugate priors, the exact posterior of the parameters/latent variables can be obtained. For example, a Bayesian linear regression model with a Gaussian prior over the parameters and a Gaussian observation model has a Gaussian posterior. If we insist on using a diagonal Gaussian approximation to this posterior and use either the variational free energy method or the Laplace's approximation, we will end up at the same solution -a Gaussian distribution with the same mean as that of the exact posterior and the diagonal precisions being the diagonal precisions of the exact posterior. Consequently, the online variational Gaussian approximation will give the same to that given by the online Laplace's approximation. However, when a diagonal Gaussian approximation is used, the batch and sequential solutions are different. In the following, we will explicitly detail the sequential variational updates for a Bayesian linear regression model to associate random binary patterns to binary outcomes BID26, and show its relationship to the online Laplace's approximation and the EWC approach of BID26. The task consists of associating a random D-dimensional binary vector x t to a random binary output y t by learning a weight vector W. Note that the possible values of the features and outputs are 1 and −1, and not 0 and 1. We also assume that the model sees only one input-output pair {x t, y t} at the t-th time step and the previous approximate posterior q t−1 (W) = By further assuming that σ y = 1 and x t 2 = 1, the equations above become: DISPLAYFORM0 DISPLAYFORM1 wherex t = y t x t. When v −1 0,d = 0, i.e. the prior is ignored, the update for the mean above is exactly equation S4 in the supplementary material of BID26. Therefore, in this case, the memory of the network trained by online variational inference is identical to that of the online Laplace's method, provided in BID26. These methods differ, in practice, when the prior is not ignored or when the parameter regularization constraints are accumulated as discussed in the main text. This equivalence also does not hold in the general case, as discussed by BID35 where the Gaussian variational approximation can be interpreted as an averaged and smoothed Laplace's approximation. | This paper develops a principled method for continual learning in deep models. | 1,280 | scitldr |
In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy. In this study, we propose an RL algorithm for solving PO tasks. Our method comprises two parts: a variational recurrent model (VRM) for modeling the environment, and an RL controller that has access to both the environment and the VRM. The proposed algorithm was tested in two types of PO robotic control tasks, those in which either coordinates or velocities were not observable and those that require long-term memorization. Our experiments show that the proposed algorithm achieved better data efficiency and/or learned more optimal policy than other alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner. Model-free deep reinforcement learning (RL) algorithms have been developed to solve difficult control and decision-making tasks by self-exploration (; ;). While various kinds of fully observable environments have been well investigated, recently, partially observable (PO) environments (; ; ;) have commanded greater attention, since real-world applications often need to tackle incomplete information and a non-trivial solution is highly desirable. There are many types of PO tasks; however, those that can be solved by taking the history of observations into account are more common. These tasks are often encountered in real life, such as videos games that require memorization of previous events and robotic control using real-time images as input . While humans are good at solving these tasks by extracting crucial information from the past observations, deep RL agents often have difficulty acquiring satisfactory policy and achieving good data efficiency, compared to those in fully observable tasks . For solving such PO tasks, several categories of methods have been proposed. One simple, straightforward solution is to include a history of raw observations in the current "observation" . Unfortunately, this method can be impractical when decision-making requires a long-term memory because dimension of observation become unacceptably large if a long history is included. Another category is based on model-free RL methods with recurrent neural networks (RNN) as function approximators (; 1991; ; ;), which is usually more tractable to implement. In this case, RNNs need to tackle two problems simultaneously : learning representation (encoded by hidden states of the RNN) of the underlying states of the environment from the state-transition data, and learning to maximize returns using the learned representation. As most RL algorithms use a bootstrapping strategy to learn the expected return and to improve the policy , it is challenging to train the RNN stably and efficiently, since RNNs are relatively more difficult to train than feedforward neural networks. The third category considers learning a model of the environment and estimating a belief state, extracted from a sequence of state-transitions (; ;). The belief state is an agent-estimated variable encoding underlying states of the environment that determines state-transitions and rewards. Perfectly-estimated belief states can thus be taken as "observations" of an RL agent that contains complete information for solving the task. 
Therefore, solving a PO task is segregated into a representation learning problem and a fully observable RL problem. Since fully observable RL problems have been well explored by the RL community, the critical challenge here is how to estimate the belief state. In this study, we developed a variational recurrent model (VRM) that models sequential observations and rewards using a latent stochastic variable. The VRM is an extension of the variational recurrent neural network (VRNN) model that takes actions into account. Our approach falls into the third category by taking the internal states of the VRM together with raw observations as the belief state. We then propose an algorithm to solve PO tasks by training the VRM and a feed-forward RL controller network, respectively. The algorithm can be applied in an end-to-end manner, without fine tuning of a hyperparameters. We then experimentally evaluated the proposed algorithm in various PO versions of robotic control tasks. The agents showed substantial policy improvement in all tasks, and in some tasks the algorithm performed essentially as in fully observable cases. In particular, our algorithm demonstrates greater performance compared to alternative approaches in environments where only velocity information is observable or in which long-term memorization is needed. Typical model-based RL approaches utilize learned models for dreaming, i.e. generating statetransition data for training the agent (; ;) or for planning of future state-transitions (; ;). This usually requires a well-designed and finely tuned model so that its predictions are accurate and robust. In our case, we do not use VRMs for dreaming and planning, but for auto-encoding state-transitions. Actually, PO tasks can be solved without requiring VRMs to predict accurately (see Appendix E). This distinguishes our algorithm from typical model-based RL methods. The work our method most closely resembles is known as stochastic latent actor-critic , in which a latent variable model is trained and uses the latent state as the belief state for the critic. SLAC showed promising using pixels-based robotic control tasks, in which velocity information needs to be inferred from third-person images of the robot. Here we consider more general PO environments in which the reward may depend on a long history of inputs, e.g., in a snooker game one has to remember which ball was potted previously. The actor network of SLAC did not take advantage of the latent variable, but instead used some steps of raw observations as input, which creates problems in achieving long-term memorization of reward-related state-transitions. Furthermore, SLAC did not include raw observations in the input of the critic, which may complicate training the critic before the model converges. The scope of problems we study can be formulated into a framework known as partially observable Markov decision processes (POMDP) . POMDPs are used to describe decision or control problems in which a part of underlying states of the environment, which determine state-transitions and rewards, cannot be directly observed by an agent. A POMDP is usually defined as a 7-tuple (S, A, T, R, X, O, γ), in which S is a set of states, A is a set of actions, and T: S × A → p(S) is the state-transition probability function that determines the distribution of the next state given current state and action. The reward function R: S × A → R decides the reward during a state-transition, which can also be probabilistic. 
Moreover, X is a set of observations, and observations are determined by the observation probability function O: S × A → p(X). By defining a POMDP, the goal is to maximize expected discounted future rewards t γ t r t by learning a good strategy to select actions (policy function). Our algorithm was designed for general POMDP problems by learning the representation of underlying states s t ∈ S via modeling observation-transitions and reward functions. However, it is expected to work in PO tasks in which s t or p(s t) can be (at least partially) estimated from the history of observations x 1:t. To model general state-transitions that can be stochastic and complicated, we employ a modified version of the VRNN . The VRNN was developed as a recurrent version of the variational auto-encoder , composed of a variational generation model and a variational inference model. It is a recurrent latent variable model that can learn to encode and predict complicated sequential observations x t with a stochastic latent variable z t. The generation model predicts future observations given the its internal states, where f s are parameterized mappings, such as feed-forward neural networks, and d t is the state variable of the RNN, which is recurrently updated by The inference model approximates the latent variable z t given x t and d t. For sequential data that contain T time steps, learning is conducted by maximizing the evidence lower bound ELBO, like that in a VEA , where where p and q are parameterized PDFs of z t by the generative model and the inference model, respectively. In a POMDP, a VRNN can be used to model the environment and to represent underlying states in its state variable d t. Thus an RL agent can benefit from a well-learned VRNN model since d t provides additional information about the environment beyond the current raw observation x t. Soft actor-critic (SAC) is a state-of-the-art model-free RL that uses experience replay for dynamic programming, which been tested on various robotic control tasks and that shows promising performance (a; b). A SAC agent learns to maximize reinforcement returns as well as entropy of its policy, so as to obtain more rewards while keeping actions sufficiently stochastic. A typical SAC implementation can be described as follows. The state value function V (s), the state-action value function Q(s, a) and the policy function π(a|s) are parameterized by neural networks, indicated by ψ, λ, η, respectively. Also, an entropy coefficient factor (also known as the temperature parameter), denoted by α, is learned to control the degree of stochasticity of the policy. The parameters are learned by simultaneously minimizing the following loss functions. where B is the replay buffer from which s t is sampled, and H tar is the target entropy. To compute the gradient of J π (η) (Equation. 7), the reparameterization trick is used on action, indicated by a η (s t). Reparameterization of action is not required in minimizing J(α) (Equation. 8) since log π η (a|s t) does not depends on α. SAC was originally developed for fully observable environments; thus, the raw observation at the current step x t was used as network input. In this work, we apply SAC in PO tasks by including the state variable d t of the VRNN in the input of function approximators of both the actor and the critic. An overall diagram of the proposed algorithm is summarized in Fig. 1(a), while a more detailed computational graph is plotted in Fig. 2. 
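The SAC loss functions referred to above (Eqs. 5 to 8) are typically computed on a replay batch as in the sketch below. The belief-state input and episode-termination masking are omitted for brevity, two Q networks are assumed, optimizing log α rather than α is an implementation choice, and the default target entropy is a placeholder (commonly the negative action dimension).

```python
import torch

def sac_losses(batch, value_net, target_value_net, q_nets, policy, log_alpha,
               gamma=0.99, target_entropy=-1.0):
    """One-step SAC losses (value, Q, policy, temperature) on a replay batch (sketch).

    batch: dict of tensors s, a, r, s_next.  policy(s) is assumed to return
    (reparameterized_action, log_prob).
    """
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    alpha = log_alpha.exp()

    a_new, logp_new = policy(s)
    q_new = torch.min(*[q(s, a_new) for q in q_nets])      # two Q networks assumed

    v_target = (q_new - alpha.detach() * logp_new).detach()
    loss_v = 0.5 * (value_net(s) - v_target).pow(2).mean()

    q_target = (r + gamma * target_value_net(s_next)).detach()
    loss_q = sum(0.5 * (q(s, a) - q_target).pow(2).mean() for q in q_nets)

    loss_pi = (alpha.detach() * logp_new - q_new).mean()
    loss_alpha = -(log_alpha * (logp_new.detach() + target_entropy)).mean()
    return loss_v, loss_q, loss_pi, loss_alpha
```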
We extend the original VRNN model to the proposed VRM model by adding action feedback, i.e., actions taken by the agent are used in the inference model and the generative model. Also, since we are modeling state-transition and reward functions, we include the reward r t−1 in the current raw observation x t for convenience. Thus, we have the inference model (Fig. 1(c) ), denoted by φ, as The generative model (Fig. 1(b) ), denoted by θ here, is For building recurrent connections, the choice of RNN types is not limited. In our study, the longshort term memory (LSTM) is used since it works well in general cases. So we have As in training a VRNN, the VRM is trained by maximizing an evidence lower bound (Fig. 1(c) ) In practice, the first term E q φ [log p θ (x t |z 1:t, x 1:t−1)] can be obtained by unrolling the RNN using the inference model (Fig. 1(c) ) with sampled sequences of x t. Since q φ and p θ are parameterized Gaussian distributions, the KL-divergence term can be analytically expressed as For computation efficiency in experience replay, we train a VRM by sampling minibatchs of truncated sequences of fixed length, instead of whole episodes. Details are found in Appendix A.1. Since training of a VRM is segregated from training of the RL controllers, there are several strategies for conducting them in parallel. For the RL controller, we adopted a smooth update strategy as in Haarnoja et al. (2018a), i.e., performing one time of experience replay every n steps. To train the VRM, one can also conduct smooth update. However, in that case, RL suffers from instability of the representation of underlying states in the VRM before it converges. Also, stochasticity of RNN state variables d can be meaninglessly high at early stage of training, which may create problems in RL. Another strategy is to pre-train the VRM for abundant epochs only before RL starts, which unfortunately, can fail if novel observations from the environment appear after some degree of policy improvement. Moreover, if pre-training and smooth update are both applied to the VRM, RL may suffer from a large representation shift of the belief state. To resolve this conflict, we propose using two VRMs, which we call the first-impression model and the keep-learning model, respectively. As the names suggest, we pre-train the first-impression model and stop updating it when RL controllers and the keep-learning model start smooth updates. Then we take state variables from both VRMs, together with raw observations, as input for the RL controller. We found that this method yields better overall performance than using a single VRM (Appendix C). Initialize the first-impression VRM M f and the keep-learning VRM M k, the RL controller C, and the replay buffer D, global step t ← 0. repeat Initialize an episode, assign M with zero initial states. while episode not terminated do Sample an action a t from π(a t |d t, x t) and execute a t, t ← t + 1. Compute 1-step forward of both VRMs using inference models. if t == step start RL then For N epochs, sample a minibatch of samples from B to update M f (Eq. 11). end if if t > step start RL and mod(t, train interval KLV RM) == 0 then Sample a minibatch of samples from B to update M k (Eq. 5, 6, 7, 8). end if if t > step start RL and mod(t, train interval RL) == 0 then Sample a minibatch of samples from B to update R (Eq. 11). end if end while until training stopped As shown in Fig. 1(a), we use multi-layer perceptrons (MLP) as function approximators for V, Q, respectively. 
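One step of the VRM described above can be sketched as follows: an LSTM state d_t, a Gaussian prior network conditioned on the RNN state and the action feedback, a Gaussian inference network that additionally sees the current observation, a decoder for x_t, and the analytic Gaussian KL of Eq. 12. The exact conditioning of each network and the Gaussian observation model are assumptions made for illustration, since they follow the general VRNN structure rather than the paper's displayed equations.

```python
import math
import torch
import torch.nn as nn

class VRMCell(nn.Module):
    """One step of a VRM-style recurrent latent variable model (illustrative sizes)."""
    def __init__(self, x_dim, a_dim, z_dim=16, h_dim=64):
        super().__init__()
        self.prior = nn.Linear(h_dim + a_dim, 2 * z_dim)          # p_theta(z_t | d_{t-1}, a_{t-1})
        self.post = nn.Linear(h_dim + a_dim + x_dim, 2 * z_dim)    # q_phi(z_t | d_{t-1}, a_{t-1}, x_t)
        self.decoder = nn.Linear(h_dim + z_dim, 2 * x_dim)         # p_theta(x_t | d_{t-1}, z_t)
        self.rnn = nn.LSTMCell(x_dim + z_dim + a_dim, h_dim)

    def forward(self, x_t, a_prev, state):
        d_prev, c_prev = state
        mu_p, logvar_p = self.prior(torch.cat([d_prev, a_prev], -1)).chunk(2, -1)
        mu_q, logvar_q = self.post(torch.cat([d_prev, a_prev, x_t], -1)).chunk(2, -1)
        z_t = mu_q + (0.5 * logvar_q).exp() * torch.randn_like(mu_q)   # reparameterized sample
        mu_x, logvar_x = self.decoder(torch.cat([d_prev, z_t], -1)).chunk(2, -1)
        # Per-step ELBO: Gaussian log-likelihood minus the analytic Gaussian KL.
        log_lik = (-0.5 * (logvar_x + (x_t - mu_x) ** 2 / logvar_x.exp()
                           + math.log(2 * math.pi))).sum(-1)
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1).sum(-1)
        state = self.rnn(torch.cat([x_t, z_t, a_prev], -1), (d_prev, c_prev))
        return (log_lik - kl), state
```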
Inputs for the Q t network are (x t, d t, a t), and V t is mapped from (x t, d t). Following Haarnoja et al. (2018a), we use two Q networks λ 1 and λ 2 and compute Q = min(Q λ1, Q λ2) in Eq. 5 and 7 for better performance and stability. Furthermore, we also used a target value network for computing V in Eq. 6 as in Haarnoja et al. (2018a). The policy function π η follows a parameterized where µ η and σ η are also MLPs. In the execution phase (Fig. 1(b) ), observation and reward x t = (X t, r t−1) are received as VRM inputs to compute internal states d t using inference models. Then, the agent selects an action, sampled from π η (a t |d t, x t), to interact with the environment. To train RL networks, we first sample sequences of steps from the replay buffer as minibatches; thus, d t can be computed by the inference models using recorded observationsx t and actionsā t (See Appendix A.1.2). Then RL networks are updated by minimizing the loss functions with gradient descent. Gradients stop at d t so that training of RL networks does not involve updating VRMs. To empirically evaluate our algorithm, we performed experiments in a range of (partially observable) continuous control tasks and compared it to the following alternative algorithms. The overall procedure is summarized in Algorithm 1. For the RL controllers, we adopted hyperparameters from the original SAC implementation (b). Both the keep-learning and first-impression VRMs were trained using learning rate 0.0008. We pre-trained the first-impression VRM for 5,000 epochs, and updated the keep-learning VRM every 5 steps. Batches of size 4, each containing a sequence of 64 steps, were used for training both the VRMs and the RL controllers. All tasks used the same hyperparameters (Appendix A.1). • SAC-MLP: The vanilla soft actor-critic implementation (a; b), in which each function is approximated by a 2-layer MLP taking raw observations as input. • SAC-LSTM: Soft actor-critic with recurrent networks as function approximators, where raw observations are processed through an LSTM layer followed by 2 layers of MLPs. This allows the agent to make decisions based on the whole history of raw observations. In this case, the network has to conduct representation learning and dynamic programming collectively. Our algorithm is compared with SAC-LSTM to demonstrate the effect of separating representation learning from dynamic programming. Note that in our algorithm, we apply pre-training of the first-impression model. For a fair comparison, we also perform pre-training for the alternative algorithm with the same epochs. For SAC-MLP and SAC-LSTM, pre-training is conducted on RL networks; while for SLAC, its model is pre-trained. The Pendlum and CartPole tasks are the classic control tasks for evaluating RL algorithms (Fig. 3, Left). The CartPole task requires learning of a policy that prevents the pole from falling down and keeps the cart from running away by applying a (1-dimensional) force to the cart, in which observable information is the coordinate of the cart, the angle of the pole, and their derivatives w.r.t time (i.e., velocities). For the Pendulum task, the agent needs to learn a policy to swing an inverse-pendulum up and to maintain it at the highest position in order to obtain more rewards. We are interested in classic control tasks because they are relatively easy to solve when fully observable, and thus the PO cases can highlight the representation learning problem. 
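As a sketch of how such partially observable variants can be constructed, a fully observable task can simply be wrapped so that only a subset of the observation dimensions is exposed (e.g. dropping, or keeping only, the velocity entries). The environment id and the index choices below are assumptions for illustration.

import numpy as np
import gym

class MaskObservation(gym.ObservationWrapper):
    # Keeps only the observation dimensions listed in `keep`, turning a fully observable
    # task into a partially observable one.
    def __init__(self, env, keep):
        super().__init__(env)
        self.keep = np.asarray(keep)
        low = env.observation_space.low[self.keep]
        high = env.observation_space.high[self.keep]
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)

    def observation(self, obs):
        return obs[self.keep].astype(np.float32)

# Pendulum observations are (cos theta, sin theta, angular velocity).
env_no_velocity = MaskObservation(gym.make("Pendulum-v0"), keep=[0, 1])    # angle information only
env_velocity_only = MaskObservation(gym.make("Pendulum-v0"), keep=[2])     # angular velocity only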
Experiments were performed in these two tasks, as well as their PO versions, in which either velocities cannot be observed or only velocities can be observed. The latter case is meaningful in real-life applications because an agent may not be able to perceive its own position, but can estimate its speed. As expected, SAC-MLP failed to solve the PO tasks (Fig. 3). While our algorithm succeeded in learning to solve all these tasks, SAC-LSTM showed poorer performance in some of them. In particular, in the pendulum task with only angular velocity observable, SAC-LSTM may suffer from the periodicity of the angle. SLAC performed well in the CartPole tasks, but showed less satisfactory sample efficiency in the Pendulum tasks. To examine performance of the proposed algorithm in more challenging control tasks with higher degrees of freedom (DOF), we also evaluated performance of the proposed algorithm in the OpenAI Roboschool environments . The Roboschool environments include a number of continuous robotic control tasks, such as teaching a multiple-joint robot to walk as fast as possible without falling down (Fig. 4, Left). The original Roboschool environments are nearly fully observable since observations include the robot's coordinates and (trigonometric functions of) joint angles, as well as (angular and coordinate) velocities. As in the PO classic control tasks, we also performed experiments in the PO versions of the Roboschool environments. Using our algorithm, experimental (Fig. 4) demonstrated substantial policy improvement in all PO tasks (visualization of the trained agents is in Appendix D). In some PO cases, the agents achieved comparable performance to that in fully observable cases. For tasks with unobserved velocities, our algorithm performed similarly to SAC-LSTM. This is because velocities can be simply estimated by one-step differences in robot coordinates and joint angles, which eases representation learning. However, in environments where only velocities can be observed, our algorithm significantly outperformed SAC-LSTM, presumably because SAC-LSTM is less efficient at encoding underlying states from velocity observations. Also, we found that learning of a SLAC agent was unstable, i.e., it sometimes could acquire a near-optimal policy, but often its policy converged to a poor one. Thus, average performance of SLAC was less promising than ours in most of the PO robotic control tasks. Another common type of PO task requires long-term memorization of past events. To solve these tasks, an agent needs to learn to extract and to remember critical information from the whole history of raw observations. Therefore, we also examined our algorithm and other alternatives in a long-term memorization task known as the sequential target reaching task , in which a robot agent needs to reach 3 different targets in a certain sequence (Fig. 5, Left). The robot can control its two wheels to move or turn, and will get one-step small, medium, and large rewards when it reaches the first, second, and third targets, respectively, in the correct sequence. The robot senses distances and angles from the 3 targets, but does not receive any signal indicating which target to reach. In each episode, the robot's initial position and those of the three targets are randomly initialized. In order to obtain rewards, the agent needs to infer the current correct target using historical observations. 
We found that agents using our algorithm achieved almost 100% success rate (reaching 3 targets in the correct sequence within maximum steps). SAC-LSTM also achieved similar success rate after convergence, but spent more training steps learning to encode underlying goal-related information from sequential observations. Also, SLAC struggled hard to solve this task since its actor only received a limited steps of observations, making it difficult to infer the correct target. One of the most concerned problems of our algorithm is that input of the RL controllers can experience representation change, because the keep-learning model is not guaranteed to converge if novel observation appears due to improved policy (e.g. for a hopper robot, "in-the-air" state can only happen after it learns to hop). To empirically investigate how convergence of the keep-learning VRM affect policy improvement, we plot the loss functions (negative ELBOs) of the the keep-learning VRM for 3 example tasks (Fig. 6). For a simpler task (CartPole), the policy was already near optimal before the VRM fully converged. We also saw that the policy was gradually improved after the VRM mostly converged (RoboschoolAnt -no velocities), and that the policy and the VRM were being improved in parallel (RoboschoolAnt -velocities only). The suggested that policy could be improved with sufficient sample efficiency even the keep-learning VRM did not converge. This can be explained by that the RL controller also extract information from the first-impression model and the raw observations, which did not experience representation change during RL. Indeed, our ablation study showed performance degradation in many tasks without the first-impression VRM (Appendix C). In this paper, we proposed a variational recurrent model for learning to represent underlying states of PO environments and the corresponding algorithm for solving POMDPs. Our experimental demonstrate effectiveness of the proposed algorithm in tasks in which underlying states cannot be simply inferred using a short sequence of observations. Our work can be considered an attempt to understand how RL benefits from stochastic Bayesian inference of state-transitions, which actually happens in the brain , but has been considered less often in RL studies. We used stochastic models in this work which we actually found perform better than deterministic ones, even through the environments we used are deterministic (Appendix C). The VRNN can be replaced with other alternatives to potentially improve performance, although developing model architecture is beyond the scope of the current study. Moreover, a recent study showed a novel way of inference using back-propagation of prediction errors, which may also benefit our future studies. Many researchers think that there are two distinct systems for model-based and model-free RL in the brain (Gläscher et al., 2010;) and a number of studies investigated how and when the brain switches between them . suggested that the hippocampus can learn a successor representation of the environment that benefits both model-free and model-based RL, contrary to the aforementioned conventional view. We further propose another possibility, that a model is learned, but not used for planning or dreaming. This blurs the distinction between model-based and model-free RL. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. 
In International Conference on Machine Learning, pp. 1856-1865, 2018a. In this section we describe the details of implementing our algorithm as well as the alternative ones. Summaries of hyperparameters can be found in Table 1 and 2. The first-impression model and the keep-learning model adopted the same architecture. Size of d and z is 256 and 64, respectively. We used one-hidden-layer fully-connected networks with 128 hidden neurons for the inference models µ φ,t, σ 2 φ,t = φ(x t, d t−1, a t−1), as well as for µ θ,t, σ 2 θ,t = θ prior (d t−1, a t−1) in the generative models. For the decoder µ x,t, σ 2 x,t = θ decoder (z t, d t−1) in the generative models, we used 2-layers MLPs with 128 neurons in each layer. The input processing layer f x is also an one-layer MLP with size-128. For all the Gaussian variables, output functions for mean are linear and output functions for variance are softplus. Other activation functions of the VRMs are tanh. The RL controllers are the same as those in SAC-MLP (Section A.2.1) except that network inputs are raw observations together with the RNN states from the first-impression model and the keep-learning model. To train the VRMs, one can use a number of entire episodes as a mini-batch, using zero initial states, as in. However, when tackling with long episodes (e.g. there can be 1,000 steps in each episode in the robotic control tasks we used) or even infinite-horizon problems, the computation consumption will be huge in back-propagation through time (BPTT). For better computation efficiency, we used 4 length-64 sequences for training the RNNs, and applied the burn-in method for providing the initial states , or more specifically, unrolling the RNNs using a portion of the replay sequence (burn-in period, up to 64 steps in our case) from zero initial states. We assume that proper initial states can be obtained in this way. This is crucial for the tasks that require long-term memorization, and is helpful to reduce bias introduces by incorrect initial states in general cases. A.2.1 SAC-MLP We followed the original implementation of SAC in (a) including hyperparameters. However, we also applied automatic learning of the entropy coefficient α (inverse of the the reward scale in Haarnoja et al. (2018a) ) as introduced by the authors in Haarnoja et al. (2018b) to avoid tuning the reward scale for each task. To apply recurrency to SAC's function approximators, we added an LSTM network with size-256 receiving raw observations as input. The function approximators of actor and critic were the same as those in SAC except receiving the LSTM's output as input. The gradients can pass through the LSTM so that the training of the LSTM and MLPs were synchronized. The training the network also followed Section A.1.2. We mostly followed the implementation of SLAC explained in the authors' paper . One modification is that since their work was using pixels as observations, convolutional neural networks (CNN) and transposed CNNs were chosen for input feature extracting and output decoding layers; in our case, we replaced the CNN and transposed CNNs by 2-layers MLPs with 256 units in each layer. In addition, the authors set the output variance σ 2 y,t for each image pixel as 0.1. However, σ 2 y,t = 0.1 can be too large for joint states/velocities as observations. We found that it will lead to better performance by setting σ y,t as trainable parameters (as that in our algorithm). We also used a 2-layer MLP with 256 units for approximating σ y (x t, d t−1). 
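A minimal sketch of the burn-in procedure mentioned above: the inference model is unrolled over the burn-in portion of a replayed sequence, starting from zero states, and the resulting states initialise the training segment. It assumes the one-step VRM interface sketched earlier; names and shapes are illustrative.

import torch

def burn_in_initial_state(vrm_cell, burn_x, burn_a, d_dim=256):
    # burn_x: [burn_len, batch, x_dim]; burn_a: [burn_len, batch, a_dim]
    batch = burn_x.shape[1]
    d = torch.zeros(batch, d_dim)
    c = torch.zeros(batch, d_dim)
    with torch.no_grad():                 # the burn-in only provides initial states, no gradients flow
        for t in range(burn_x.shape[0]):
            d, c, _ = vrm_cell(burn_x[t], burn_a[t], d, c)
    return d, c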
To avoid network weights being divergent, all the activation functions of the model were tanh except those for outputs. For the robotic control tasks and the Pendulum task, we used environments (and modified them for PO versions) from OpenAI Gym . The CartPole environment with a continuous action space was from , and the codes for the sequential target reaching tasks were provided by the authors . In the no-velocities cases, velocity information was removed from raw observations; while in the velocities-only cases, only velocity information was retained in raw observations. We summarize key information of each environment in Table 3. The performance curves were obtained in evaluation phases in which agents used same policy but did not update networks or record state-transition data. Each experiment was repeated using 5 different random seeds. This section demonstrated a ablation study in which we compared the performance of the proposed algorithm to the same but with some modification: • With a single VRM. In this case, we used only one VRM and applied both pre-training and smooth update to it. • Only first-impression model. In this case, only the first-impression model was used and pre-trained. • Only keep-learning model. In this case, only the keep-learning model was used and smooth-update was applied. • Deterministic model. In this case, the first-imporession model and the keep-learning model were deterministic RNNs which learned to model the state-transitions by minimizing mean-square error between prediction and observations instead of ELBO. The network architecture was mostly the same as the VRM expect that the inference model and the generative model were merged into a deterministic one. The learning curves are shown in Fig. 7. It can be seen that the proposed algorithm consistently performed similar as or better than the modified ones. Here we show actual movements of the trained robots in the PO robotic control tasks (Fig. 8). It can be seen that the robots succeeded in learning to hop or walk, although their policy may be sub-optimal. As we discussed in Section 2, our algorithm relies mostly on encoding capacity of models, but does not require models to make accurate prediction of future observations. Fig. 9 shows open-loop (using the inference model to compute the latent variable z) and close-loop (purely using the generative model) prediction of raw observation by the keep-learning models of randomly selected trained agents. Here we showcase "RoboschoolHopper -velocities only" and "Pendulum -no velocities" because in these tasks our algorithm achieved similar performance to those in fully-observable versions (Fig. 4), although the prediction accuracy of the models was imperfect. To empirically show how choice of hyperparameters of the VRMs affect RL performance, we conducted experiments using hyperparameters different from those used in the main study. More specifically, the learning rate for both VRMs was randomly selected from {0.0004, 0.0006, 0.0008, 0.001} and the sequence length was randomly selected from {16, 32, 64} (the batch size was 256/(sequence length) to ensure that the total number of samples in a batch was 256 which matched with the alternative approaches). The other hyperparameters were unchanged. The can be checked in Fig 10 for all the environments we used. The overall performance did not significantly change using different, random hyperparameters of the VRMs, although we could observe significant performance improvement (e.g. 
RoboschoolWalker2d) or degradation (e.g. RoboschoolHopper - velocities only) in a few tasks using different hyperparameters. Therefore, the representation learning part (the VRMs) of our algorithm does not suffer from high sensitivity to hyperparameters. This can be explained by the fact that we do not use a bootstrapping update rule (i.e. one in which the estimation of the targets of the value functions depends on the estimation of the value functions themselves) to train the VRMs. G SCALABILITY Table 4 shows the scalability of our algorithm and the alternative ones. Table 4: Wall-clock time and number of parameters of our algorithm and the alternative ones. The working environment was a desktop computer using an Intel i7-6850K CPU and the task is "Velocities-only RoboschoolHopper". The wall-clock time includes training the first-impression VRM and any pre-training. | A deep RL algorithm for solving POMDPs by auto-encoding the underlying states using a variational recurrent model | 1,281 | scitldr
This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning (RL). The setup is as follows: given an episodic task and a finite number of off-policy RL algorithms, a meta-algorithm has to decide which RL algorithm is in control during the next episode so as to maximize the expected return. The article presents a novel meta-algorithm, called Epochal Stochastic Bandit Algorithm Selection (ESBAS). Its principle is to freeze the policy updates at each epoch, and to leave a rebooted stochastic bandit in charge of the algorithm selection. Under some assumptions, a thorough theoretical analysis demonstrates its near-optimality given the structural sampling budget limitations. ESBAS is first empirically evaluated on a dialogue task where it is shown to outperform each individual algorithm in most configurations. ESBAS is then adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS. SSBAS is evaluated on a fruit collection task, where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on an Atari game, where it improves the performance by a wide margin. Reinforcement Learning (RL, BID18) is a machine learning framework for optimising the behaviour of an agent interacting with an unknown environment. For the most practical problems, such as dialogue or robotics, trajectory collection is costly and sample efficiency is the main key performance indicator. Consequently, when applying RL to a new problem, one must carefully choose in advance a model, a representation, an optimisation technique and their parameters. Facing this complexity of choice, RL and domain expertise are not sufficient. Confronted with the cost of data, the popular trial-and-error approach shows its limits. We develop an online learning version (BID1) of Algorithm Selection (AS, BID15; BID17; BID5). It consists in testing several algorithms on the task and in selecting the best one at a given time. For clarity, throughout the whole article, the algorithm selector is called a meta-algorithm, and the set of algorithms available to the meta-algorithm is called a portfolio. The meta-algorithm maximises an objective function such as the RL return. Beyond the sample efficiency objective, the online AS approach additionally addresses four practical problems for online RL-based systems. First, it improves robustness: if an algorithm fails to terminate, or outputs an aberrant policy, it will be dismissed and others will be selected instead. Second, convergence guarantees and empirical efficiency may be united by covering the empirically efficient algorithms with slower algorithms that have convergence guarantees. Third, it enables curriculum learning: shallow models control the policy in the early stages, while deep models discover the best solution in late stages. And fourth, it allows the definition of an objective function that is not an RL return. A fair algorithm selection implies a fair budget allocation between the algorithms, so that they can be equitably evaluated and compared. In order to comply with this requirement, the reinforcement learning algorithms in the portfolio are assumed to be off-policy, and are trained on every trajectory, regardless of which algorithm controls it. Section 2 provides a unifying view of RL algorithms that allows information sharing between algorithms, whatever their state representations and optimisation techniques.
It also formalises the problem of online selection of off-policy RL algorithms. Next, Section 3 presents the Epochal Stochastic Bandit AS (ESBAS), a novel meta-algorithm addressing the online off-policy RL AS problem. Its principle relies on a doubling trick: it divides the time-scale into epochs of exponential length inside which the algorithms are not allowed to update their policies. During each epoch, the algorithms have therefore a constant policy and a stochastic multi-armed bandit can be in charge of the AS with strong pseudo-regret theoretical guaranties. A thorough theoretical analysis provides for ESBAS upper bounds. Then, Section 4 evaluates ESBAS on a dialogue task where it outperforms each individual algorithm in most configurations. Afterwards, in Section 5, ESBAS, which is initially designed for a growing batch RL setting, is adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS. It is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on Q*bert, where running several DQN with different network size and depth in parallel allows to improve the final performance by a wide margin. Finally, Section 6 concludes the paper with prospective ideas of improvement. Stochastic environment a(t) o(t + 1) r(t + 1)Figure 1: RL framework: after performing action a(t), the agent perceives observation o(t + 1) and receives reward r(t + 1).The goal of this section is to enable information sharing between algorithms, even though they are considered as black boxes. We propose to share their trajectories expressed in a universal format: the interaction process. Reinforcement Learning (RL) consists in learning through trial and error to control an agent behaviour in a stochastic environment: at each time step t ∈ N, the agent performs an action a(t) ∈ A, and then perceives from its environment a signal o(t) ∈ Ω called observation, and receives a reward r(t) ∈ R, bounded between R min and R max. Figure 1 illustrates the RL framework. This interaction process is not Markovian: the agent may have an internal memory. In this article, the RL problem is assumed to be episodic. Let us introduce two time scales with different notations. First, let us define meta-time as the time scale for AS: at one meta-time τ corresponds a meta-algorithm decision, i.e. the choice of an algorithm and the generation of a full episode controlled with the policy determined by the chosen algorithm. Its realisation is called a trajectory. Second, RL-time is defined as the time scale inside a trajectory, at one RL-time t corresponds one triplet composed of an observation, an action, and a reward. Let E denote the space of trajectories. A trajectory ε τ ∈ E collected at meta-time τ is formalised as a sequence of (observation, action, reward) triplets: ε τ = o τ (t), a τ (t), r τ (t) t∈ 1,|ετ | ∈ E, where |ε τ | is the length of trajectory ε τ. The objective is, given a discount factor 0 ≤ γ < 1, to generate trajectories with high discounted cumulative reward, also called return, and noted µ(ε τ) = |ετ | t=1 γ t−1 r τ (t). Since γ < 1 and R is bounded, the return is also bounded. The trajectory set at meta-time T is denoted by D T = {ε τ} τ ∈ 1,T ∈ E T. A sub-trajectory of ε τ until RL-time t is called the history at RL-time t and written ε τ (t) with t ≤ |ε τ |. 
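Under these definitions a trajectory is simply a sequence of (observation, action, reward) triplets and its return is the discounted sum of its rewards; the following lines illustrate this (the names are ours, not the paper's).

def trajectory_return(trajectory, gamma=0.9):
    # mu(eps) = sum over t of gamma^(t-1) * r_t, for a trajectory given as (o, a, r) triplets
    return sum(gamma ** t * r for t, (_, _, r) in enumerate(trajectory))

# A 3-step trajectory with rewards 0, 0, 1 and gamma = 0.9 has return 0.9^2 = 0.81.
episode = [("o1", "a1", 0.0), ("o2", "a2", 0.0), ("o3", "a3", 1.0)]
assert abs(trajectory_return(episode) - 0.81) < 1e-12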
The history records what happened in episode ε τ until RL-time t: ε τ (t) = o τ (t), a τ (t), r τ (t) t ∈ 1,t ∈ E.The goal of each RL algorithm α is to find a policy π *: E → A which yields optimal expected returns. Such an algorithm α is viewed as a black box that takes as an input a trajectory set D ∈ E +, where E + is the ensemble of trajectory sets of undetermined size: DISPLAYFORM0, and that outputs a policy π α D. Consequently, a RL algorithm is formalised as follows: α: DISPLAYFORM1 Such a high level definition of the RL algorithms allows to share trajectories between algorithms: a trajectory as a sequence of observations, actions, and rewards can be interpreted by any algorithm in its own decision process and state representation. For instance, RL algorithms classically rely on an MDP defined on a explicit or implicit state space representation DISPLAYFORM2 on the trajectories projected on its state space representation. Off-policy RL optimisation techniques compatible with this approach are numerous in the literature BID20; BID10. As well, any post-treatment of the state set, any alternative decision process BID9, and any off-policy algorithm may be used. The algorithms are defined here as black boxes and the considered meta-algorithms will be indifferent to how the algorithms compute their policies, granted they satisfy the off-policy assumption. Pseudo-code 1: Online RL AS setting DISPLAYFORM0 Generate trajectory ε τ with policy π DISPLAYFORM1 The online learning approach is tackled in this article: different algorithms are experienced and evaluated during the data collection. Since it boils down to a classical exploration/exploitation trade-off, multi-armed bandit have been used for combinatorial search AS (; BID1 and evolutionary algorithm meta-learning . The online AS problem for off-policy RL is novel and we define it as follows: DISPLAYFORM2 is the current trajectory set;• P = {α k} k∈ 1,K is the portfolio of off-policy RL algorithms;• µ: E → R is the objective function, generally set as the RL return. Pseudo-code 1 formalises the online RL AS setting. A meta-algorithm is defined as a function from a trajectory set to the selection of an algorithm: σ: E + → P. The meta-algorithm is queried at each meta-time τ = |D τ −1 |+1, with input D τ −1, and it ouputs algorithm σ (D τ −1) = σ(τ) ∈ P controlling with its policy π σ(τ) Dτ−1 the generation of the trajectory ε τ in the stochastic environment. Figure 2 illustrates the algorithm with a diagram flow. The final goal is to optimise the cumulative expected return. It is the expectation of the sum of rewards obtained after a run of T trajectories: DISPLAYFORM3 DISPLAYFORM4 Return µ(ε τ): DISPLAYFORM5 Figure 2: Algorithm selection for reinforcement learning flow diagram expectations. The outside expectation E σ assumes the meta-algorithm σ fixed and averages over the trajectory set generation and the corresponding algorithms policies. The inside expectation Eµ assumes the policy fixed and averages over its possible trajectories in the stochastic environment. Nota bene: there are three levels of decision: meta-algorithm σ selects algorithm α that computes policy π that is in control. In this paper, the focus is at the meta-algorithm level. In this paper, we focus on sample efficiency, where a sample is meant to be a trajectory. This is motivated by the following reasons. First, in most real-world systems, the major regret is on the task failure. 
The time expenditure is only a secondary concern that is already assessed by the discount factor dependency in the return. Second, it would be inconsistent to consider regret on a different time scale as the algorithm selection. Also, policy selection on non-episodic RL is known as a very difficult task where state-of-the-art algorithms only obtain regrets of the order of O(√ T log(T)) on stationary policies. Third, the regret on the decision steps cannot be assessed, since the rewards are discounted in the RL objective function. And finally, the bandit rewards (defined as the objective function in Section 2.2) may account for the length of the episode. In order to evaluate the meta-algorithms, let us formulate two additional notations. First, the optimal expected return Eµ * ∞ is defined as the highest expected return achievable by a policy of an algorithm in portfolio P. Second, for every algorithm α in the portfolio, let us define σ α as its canonical metaalgorithm, i.e. the meta-algorithm that always selects algorithm α: ∀τ, σ α (τ) = α. The absolute pseudo-regret ρ σ abs (T) defines the regret as the loss for not having controlled the trajectory with an optimal policy: DISPLAYFORM0 It is worth noting that an optimal meta-algorithm will unlikely yield a null regret because a large part of the absolute pseudo-regret is caused by the sub-optimality of the algorithm policies when the trajectory set is still of limited size. Indeed, the absolute pseudo-regret considers the regret for not selecting an optimal policy: it takes into account both the pseudo-regret of not selecting the best algorithm and the pseudo-regret of the algorithms for not finding an optimal policy. Since the metaalgorithm does not interfere with the training of policies, it ought not account for the pseudo-regret related to the latter. Related to AS for RL, BID16 use meta-learning to tune a fixed RL algorithm in order to fit observed animal behaviour, which is a very different problem to ours. In Cauwet et al. FORMULA9; BID8, the RL AS problem is solved with a portfolio composed of online RL algorithms. The main limitation from these works relies on the fact that on-policy algorithms were used, which prevents them from sharing trajectories among algorithms . Meta-learning specifically for the eligibility trace parameter has also been studied in BID21. BID19 study the learning process of RL algorithms and selects the best one for learning faster on a new task. This work is related to batch AS.An intuitive way to solve the AS problem is to consider algorithms as arms in a multi-armed bandit setting. The bandit meta-algorithm selects the algorithm controlling the next trajectory ε and the objective function µ(ε) constitutes the reward of the bandit. The aim of prediction with expert advice is to minimise the regret against the best expert of a set of predefined experts. When the experts learn during time, their performances evolve and hence the sequence of expert rewards is non-stationary. The exponential weight algorithms BID3 ) are designed for prediction with expert advice when the sequence of rewards of experts is generated by an oblivious adversary. 
This approach has been extended for competing against the best sequence of experts by adding in the update of weights a forgetting factor proportional to the mean reward (see Exp3.S in BID3), or by combining Exp3 with a concept drift detector BID0.The exponential weight algorithms have been extended to the case where the rewards are generated by any sequence of stochastic processes of unknown means .A recent article uses Exp3.S algorithm BID3 for curriculum learning. The drawback of adversarial approaches is that they lead to very conservative algorithms which has to work against an adversary. For handeling non-stationarity of rewards, another way is to assume that the rewards generated by each arm are not i.i.d., but are governed by some more complex stochastic processes. The stochastic bandit algorithm such as UCB can be extended to the case of switching bandits using a discount factor or a window to forget the past. Restless bandits BID22; BID13 assume that a Markov chain governs the reward of arms independently of whether the learner is played or not the arm. These classes of bandit algorithms are not designed for experts that learn and hence evolve at each time step. Our approach takes the opposite view of adversarial bandits: we design a stochastic algorithm specifically for curriculum learning based on the doubling trick. This reduction of the algorithm selection problem into several stochastic bandit problems with doubling time horizon begins to favour fast algorithms, and then more efficient algorithms. ESBAS description -To solve the off-policy RL AS problem, we propose a novel meta-algorithm called Epochal Stochastic Bandit AS (ESBAS). Because of the non-stationarity induced by the algorithm learning, the stochastic bandit cannot directly select algorithms. Instead, the stochastic bandit can choose fixed policies. To comply with this constraint, the meta-time scale is divided into epochs inside which the algorithms policies cannot be updated: the algorithms optimise their policies only when epochs start, in such a way that the policies are constant inside each epoch. This can be seen as a doubling trick. As a consequence and since the returns are bounded, at each new epoch, the problem can rigorously be cast into an independent stochastic K-armed bandit Ξ, with K = |P|. Data: D 0, P, µ: the online RL AS setting DISPLAYFORM0 n kmax + 1 n kmax ← n kmax + 1 and n ← n + 1 end endThe ESBAS meta-algorithm is formally sketched in Pseudo-code 2 embedding UCB1 Auer et al. (2002a) as the stochastic Karmed bandit Ξ. The meta-algorithm takes as an input the set of algorithms in the portfolio. Meta-time scale is fragmented into epochs of exponential size. The β th epoch lasts 2 β metatime steps, so that, at meta-time τ = 2 β, epoch β starts. At the beginning of each epoch, the ESBAS meta-algorithm asks each algorithm in the portfolio to update their current policy. Inside an epoch, the policy is never updated anymore. At the beginning of each epoch, a new Ξ instance is reset and run. During the whole epoch, Ξ selects at each meta-time step the algorithm in control of the next trajectory. Theoretical analysis -ESBAS intends to minimise the regret for not choosing the algorithm yielding the maximal return at a given meta-time τ. It is short-sighted: it does not intend to optimise the algorithms learning. 
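A compact sketch of the ESBAS loop described above (Pseudo-code 2), with UCB1 as the stochastic bandit Ξ: the policies are refreshed and then frozen at the start of each epoch of length 2^β, and a fresh bandit instance selects the algorithm controlling each episode. The rollout helper and the algorithms' update(D) interface are assumptions made for illustration.

import math

def esbas(env, portfolio, rollout, mu, n_epochs, xi=0.25):
    # portfolio[k].update(D) is assumed to return a policy trained on the trajectory set D;
    # rollout(env, policy) is assumed to generate one episode; mu is the objective function.
    D, K = [], len(portfolio)
    for beta in range(n_epochs):
        policies = [alg.update(D) for alg in portfolio]   # policies are frozen for the whole epoch
        counts, means = [0] * K, [0.0] * K                # a fresh bandit instance per epoch
        for _ in range(2 ** beta):                        # epoch beta lasts 2^beta meta-time steps
            if 0 in counts:
                k = counts.index(0)                       # play every arm once first
            else:
                n = sum(counts)
                k = max(range(K),
                        key=lambda i: means[i] + math.sqrt(xi * math.log(n) / counts[i]))
            episode = rollout(env, policies[k])
            D.append(episode)                             # shared data: every algorithm trains on it
            counts[k] += 1
            means[k] += (mu(episode) - means[k]) / counts[k]
    return D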
We define the short-sighted pseudo-regret as follows: DISPLAYFORM1 The short-sighted pseudo-regret depends on the gaps ∆ α β: the difference of expected return between the best algorithm during epoch β and algorithm α. The smallest non null gap at epoch β is noted ∆ † β. We write its limit when β tends to infinity with ∆ † ∞. Analysis relies on three assumptions that are formalised in Section B of the supplementary material. First, more data is better data states that algorithms improve on average from having additional data. Second, order compatibility assumes that, if a dataset enables to generate a better policy than another dataset, then, on average, adding new samples to both datasets should not change the dataset ordering. Third and last, let us introduce and discuss more in depth the learning is fair assumption. The fairness of budget distribution has been formalised in. It is the property stating that every algorithm in the portfolio has as much resources as the others, in terms of computational time and data. It is an issue in most online AS problems, since the algorithm that has been the most selected has the most data, and therefore must be the most advanced one. A way to circumvent this issue is to select them equally, but, in an online setting, the goal of AS is precisely to select the best algorithm as often as possible. Our answer is to require that all algorithms in the portfolio are learning off-policy, i.e. without bias induced by the behavioural policy used in the learning dataset. By assuming that all algorithms learn off-policy, we allow information sharing Cauwet et al. FORMULA9 between algorithms. They share the trajectories they generate. As a consequence, we can assume that every algorithm, the least or the most selected ones, will learn from the same trajectory set. Therefore, the control unbalance does not directly lead to unfairness in algorithms performances: all algorithms learn equally from all trajectories. However, unbalance might still remain in the exploration strategy if, for instance, an algorithm takes more benefit from the exploration it has chosen than the one chosen by another algorithm. For analysis purposes, we assumes the complete fairness of AS.Based on those assumptions, three theorems show that ESBAS absolute pseudo-regret can be expressed in function of the absolute pseudo-regret of the best canonical algorithm and ESBAS shortsighted pseudo-regret. They also provide upper bounds on the ESBAS short-sighted pseudo-regret as a function of the order of magnitude of the gap ∆ † β. Indeed, the stochastic multi-armed bandit algorithms have bounds that are, counter-intuitively, inversely proportional to the gaps between the best arm and the other ones. In particular if ∆ † β tends to 0, the algorithm selection might prove to be difficult, depending on the order of magnitude of it tending to 0. The full theoretical analysis can be found in the supplementary material, Section B. We provide here an intuitive overlook of its . TAB0 numerically reports those bounds for a two-fold portfolio, depending on the nature of the algorithms. It must be read by line. 
According to the first column: the order of magnitude of ∆ † β, the ESBAS short-sighted pseudo-regret bounds are displayed in the second column, and the third and fourth columns display the ESBAS absolute pseudo-regret bounds also depending on the order of magnitude of the best canonical algorithm absolute pseudo-regret: ρ σ * Regarding the short-sighted upper bounds, the main appears in the last line, when the algorithms converge to policies with different performance: ESBAS converges with a regret in O log 2 (T)/∆ † ∞. Also, one should notice that the bounds of the first two lines are obtained by summing the gaps: this means that the algorithms are perceived equally good and that their gap goes beyond the threshold of distinguishability. This threshold is structurally at ∆ † DISPLAYFORM0 The impossibility to determine which is the better algorithm is interpreted in Cauwet et al. FORMULA9 as a budget issue. The meta-time necessary to distinguish through evaluation arms that are ∆ † β apart takes Θ(1/∆ †2 β) meta-time steps. If the budget is inferior, then we are under the distinguishability threshold and the best bounds are obtained by summing up the gaps. As a consequence, if ∆ † DISPLAYFORM1. However, the budget, i.e. the length of epoch β starting at meta-time T = 2 β, equals DISPLAYFORM2 can therefore be considered as the structural limit of distinguishability between the algorithms. Additionally, the absolute upper bounds are logarithmic in the best case and still inferior to O(√ T) in the worst case, which compares favorably with those of discounted UCB and Exp3.S in O(T log(T)) and Rexp3 in O(T 2/3), or the RL with Policy Advice's regret bounds of O(√ T log(T)) on stationary policies Azar et al. FORMULA9 (on non-episodic RL tasks). DISPLAYFORM3, and c DISPLAYFORM4 ESBAS is particularly designed for RL tasks when it is impossible to update the policy after every transition or episode. Policy update is very costly in most real-world applications, such as dialogue systems for which a growing batch setting is preferred BID6. ESBAS practical efficiency is therefore illustrated on a dialogue negotiation game BID7 ) that involves two players: the system p s and a user p u. Their goal is to find an agreement among 4 alternative options. At each dialogue, for each option η, players have a private uniformly drawn cost ν Table 2 of the supplementary material. Figures 3a and 3b plot the typical curves obtained with ESBAS selecting from a portfolio of two learning algorithms. On Figure 3a, the ESBAS curve tends to reach more or less the best algorithm in each point as expected. Surprisingly, Figure 3b reveals that the algorithm selection ratios are not very strong in favour of one or another at any time. Indeed, the variance in trajectory set collection makes simple better on some runs until the end. ESBAS proves to be efficient at selecting the best algorithm for each run and unexpectedly obtains a negative relative pseudo-regret of -90. Figures 3c and 3d plot the typical curves obtained with ESBAS selecting from a portfolio constituted of a learning algorithm and an algorithm with a deterministic and stationary policy. ESBAS succeeds in remaining close to the best algorithm at each epoch and saves 5361 return value for not selecting the constant algorithm, but overall yields a regret for not using only the best algorithm. 
ESBAS also performs well on larger portfolios of 8 learners (see Figure 3e) with negative relative pseudo-regrets: −10, even if the algorithms are, on average, almost selected uniformly, as Figure 3f reveals. Each individual run may present different ratios, depending on the quality of the trained policies. ESBAS also offers some curriculum learning, but more importantly, early bad policies are avoided. Algorithms with a constant policy do not improve over time, and the full reset of the K-armed bandit urges ESBAS to unnecessarily explore the same underachieving algorithm again and again. One easy way to circumvent this drawback is to use this knowledge and not reset their arms. By operating this way, when the learning algorithm(s) start(s) outperforming the constant one, ESBAS simply neither exploits nor explores the constant algorithm anymore. Without arm reset for constant algorithms, ESBAS's learning curve follows the learning algorithm's learning curve perfectly when the latter outperforms the constant algorithm, and achieves strong negative relative pseudo-regrets. Again, the interested reader may refer to Table 2 in the supplementary material for the numerical results. Still, another harmful phenomenon may happen: the constant algorithm overrides the natural exploration of the learning algorithm in the early stages, and when the learning algorithm finally outperforms the constant algorithm, its exploration parameter is already low. This can be observed in experiments with a constant algorithm of expected return inferior to 1, as reported in Table 2. We propose to adapt ESBAS to a true online setting where algorithms update their policies after each transition. The stochastic bandit is now trained on a sliding window containing the last τ/2 selections. Even though the arms are not stationary over this window, the bandit eventually forgets the oldest arm pulls. This algorithm is called SSBAS, for Sliding Stochastic Bandit AS. Despite the lack of theoretical convergence bounds, we demonstrate on two domains and two different meta-optimisation tasks that SSBAS outperforms all algorithms in the portfolio by a wide margin. The goal here is to demonstrate that SSBAS can perform efficient hyper-parameter optimisation on a simple tabular domain: a 5x5 gridworld problem (see Figure 4), where the goal is to collect the fruits placed at each corner as fast as possible. The episodes terminate when all fruits have been collected or after 100 transitions. The objective function µ used to optimise the stochastic bandit Ψ is no longer the RL return, but the time spent to collect all the fruits (200 if they were not all collected). The agent has 18 possible positions and there are 2^4 − 1 = 15 non-terminal fruit configurations, resulting in 270 states. The action set is A = {N, E, S, W}. The reward function mean is 1 when eating a fruit, 0 otherwise. The reward function is corrupted with a strong Gaussian white noise of variance ζ^2 = 1. The portfolio is composed of 4 Q-learning algorithms varying from each other by their learning rates: {0.001, 0.01, 0.1, 0.5}. They all have the same linearly annealing ε_τ-greedy exploration.
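A sketch of the sliding-window selection rule used by SSBAS: the UCB statistics are recomputed on the most recent half of the selection history only, so that old pulls of a still-improving algorithm are eventually forgotten. Only the "last τ/2 selections" rule comes from the text; the class interface is an assumption.

import math

class SlidingWindowUCB:
    def __init__(self, n_arms, xi=0.25):
        self.n_arms = n_arms
        self.xi = xi
        self.history = []                                  # list of (arm, objective value) pairs

    def select(self):
        window = self.history[len(self.history) // 2:]     # the last tau/2 selections
        counts = [0] * self.n_arms
        sums = [0.0] * self.n_arms
        for arm, value in window:
            counts[arm] += 1
            sums[arm] += value
        if 0 in counts:
            return counts.index(0)                         # try arms absent from the window first
        n = len(window)
        return max(range(self.n_arms),
                   key=lambda k: sums[k] / counts[k] + math.sqrt(self.xi * math.log(n) / counts[k]))

    def update(self, arm, value):
        self.history.append((arm, value))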
The selection ratios displayed in Figure 5 show that SSBAS selected the algorithm with the highest (0.5) learning rate in the first stages, enabling efficient propagation of the reward signal through the visited states; then, over time, it preferentially chooses the algorithm with a learning rate of 0.01, which is less sensitive to the reward noise; finally, SSBAS favours the algorithm with the finest learning rate (0.001). After 1 million episodes, SSBAS saves half a transition per episode on average as compared to the best fixed learning rate value (0.1), and two transitions against the worst fixed learning rate in the portfolio (0.001). We compare SSBAS to the efficiency of a linearly annealing learning rate, 1/(1 + 0.0001τ): SSBAS performs under 21 steps on average after 10^5 episodes, while the linearly annealing learning rate algorithm still performs a bit over 21 steps after 10^6 steps. This is because SSBAS can adapt the best performing learning rate over time. We also compare SSBAS's performance to Exp3.S's performance BID3. The analysis of the algorithm selection history shows that Exp3.S is too conservative and fails to efficiently select the shallowest algorithms at the beginning of learning (number of steps at the 10000th episode: 28.3 for SSBAS vs 39.1 for Exp3.S), producing trajectories of lesser quality and therefore critically delaying the general training of all algorithms (number of steps at the 100000th episode: 20.9 for SSBAS vs 22.5 for Exp3.S). Overall, SSBAS outperforms Exp3.S by a wide margin: the number of steps averaged over all 10^5 training episodes is 28.7 for SSBAS vs 33.6 for Exp3.S. We then evaluate SSBAS on an Atari game, more precisely Q*bert (see a frame on Figure 6), where the goal is to step once on each block. Then a new similar level starts. In later levels, one needs to step twice on each block, and even later, stepping again on the same blocks will cancel the colour change. We used three different settings of DQN instances: small uses the setting described in BID10, large uses the setting in BID11, and finally huge uses an even larger network (see the supplementary material, Section C.2, for details). DQN is known to reach a near-human level performance at Q*bert. Our SSBAS instance runs 6 algorithms, with 2 different random initialisations of each DQN setting. Disclaimer: contrarily to the other experiments, each curve is the result of a single run, and the improvement might be due to randomness. Indeed, the DQN training is very long and SSBAS needs to train all the models in parallel. A more computationally-efficient solution might be to use the same architecture as BID14. Figure 7 reveals that SSBAS experiences a slight delay keeping close to the best setting's performance during the initial learning phase, but, surprisingly, finds a better policy than the single algorithms in its portfolio and than the ones reported in the previous DQN articles. We observe that the large setting is surprisingly by far the worst one on the Q*bert task, illustrating the difficulty of predicting which model will be the most efficient on a new task. SSBAS allows the best one to be selected online.
Algorithms are allowed to update their policies only at the start each epoch. As the policies are constant inside each epoch, the problem can be cast into a stochastic multi-armed bandit. An implementation is detailed and a theoretical analysis leads to upper bounds on the regrets. ESBAS is designed for the growing batch RL setting. This limited online setting is required in many real-world applications where updating the policy requires a lot of resources. Experiments are first led on a negotiation dialogue game, interacting with a human data-built simulated user. In most settings, not only ESBAS demonstrates its efficiency to select the best algorithm, but it also outperforms the best algorithm in the portfolio thanks to curriculum learning, and variance reduction similar to that of Ensemble Learning. Then, ESBAS is adapted to a full online setting, where algorithms are allowed to update after each transition. This meta-algorithm, called SSBAS, is empirically validated on a fruit collection task where it performs efficient hyper-parameter optimisation. SSBAS is also evaluated on the Q*bert Atari game, where it achieves a substantial improvement over the single algorithm counterparts. We interpret ESBAS/SSBAS's success at reliably outperforming the best algorithm in the portfolio as the of the four following potential added values. First, curriculum learning: ESBAS/SSBAS selects the algorithm that is the most fitted with the data size. This property allows for instance to use shallow algorithms when having only a few data and deep algorithms once collected a lot. Second, diversified policies: ESBAS/SSBAS computes and experiments several policies. Those diversified policies generate trajectories that are less redundant, and therefore more informational. As a , the policies trained on these trajectories should be more efficient. Third, robustness: if one algorithm fails at finding good policies, it will soon be discarded. This property prevents the agent from repeating again and again the same obvious mistakes. Four and last, run adaptation: of course, there has to be an algorithm that is the best on average for one given task at one given meta-time. But depending on the variance in the trajectory collection, it did not necessarily train the best policy for each run. The ESBAS/SSBAS meta-algorithm tries and selects the algorithm that is the best at each run. Some of those properties are inherited by algorithm selection similarity with ensemble learning . BID23 uses a vote amongst the algorithms to decide the control of the next transition. Instead, ESBAS/SSBAS selects the best performing algorithm. Regarding the portfolio design, it mostly depends on the available computational power per sample ratio. For practical implementations, we recommend to limit the use of two highly demanding algorithms, paired with several faster algorithms that can take care of first learning stages, and to use algorithms that are diverse regarding models, hypotheses, etc. Adding two algorithms that are too similar adds inertia, while they are likely to not be distinguishable by ESBAS/SSBAS. More detailed recommendations for building an efficient RL portfolio are left for future work. 
Glossary of notations (fragment): speech recognition score (Section C.1.1); normal distribution of centre x and variance v^2 (Section C.1.1); REFINSIST ⇔ REFPROP(η), with η being the last proposed option (Section C.1.1); REFNEWPROP ⇔ REFPROP(η), with η being the best option that has not been proposed yet (Section C.1.1); ACCEPT ⇔ ACCEPT(η), with η being the last understood option proposition (Section C.1.1); constant-µ: non-learning algorithm with average performance µ (Section C.1.2); ζ: number of noisy features added to the feature set (Section C.1.2); P(X = x | Y = y): probability that X = x conditionally to Y = y. B THEORETICAL ANALYSIS The theoretical aspects of algorithm selection for reinforcement learning in general, and Epochal Stochastic Bandit Algorithm Selection in particular, are thoroughly detailed in this section. The proofs of the theorems are provided in Sections E, F, and G. We recall and formalise the absolute pseudo-regret definition provided in Section 2.3. Definition 1 (Absolute pseudo-regret). The absolute pseudo-regret ρ^σ_abs(T) compares the meta-algorithm's expected return with the optimal expected return: DISPLAYFORM0 The theoretical analysis is hindered by the fact that AS not only directly influences the return distribution, but also the trajectory set distribution, and therefore the policies learnt by the algorithms for the next trajectories, which will indirectly affect the future expected returns. In order to allow policy comparison, based on a relation between the trajectory sets they are derived from, our analysis relies on two assumptions. Assumption 1 (More data is better data). The algorithms train better policies with a larger trajectory set on average, whatever the algorithm that controlled the additional trajectory: DISPLAYFORM0 Assumption 1 states that algorithms are off-policy learners and that additional data cannot lead to performance degradation on average. An algorithm that is not off-policy could be biased by a specific behavioural policy and would therefore transgress this assumption. Assumption 2 (Order compatibility). If an algorithm trains a better policy with one trajectory set than with another, then it remains the same, on average, after collecting an additional trajectory from any algorithm: DISPLAYFORM0 Assumption 2 states that a performance relation between two policies trained on two trajectory sets is preserved on average after adding another trajectory, whatever the behavioural policy used to generate it. From these two assumptions, Theorem 1 provides an upper bound in order of magnitude as a function of the worst algorithm in the portfolio. It is verified for any meta-algorithm σ. Theorem 1 (Not worse than the worst). The absolute pseudo-regret is bounded by the worst algorithm's absolute pseudo-regret in order of magnitude: DISPLAYFORM1 Contrary to what the name of Theorem 1 suggests, a meta-algorithm might be worse than the worst algorithm (similarly, it can be better than the best algorithm), but not in order of magnitude. Its proof is rather complex for such an intuitive result because, in order to control all the possible outcomes, one needs to translate the selections of algorithm α with meta-algorithm σ into the canonical meta-algorithm σ_α's view. ESBAS intends to minimise the regret for not choosing the best algorithm at a given meta-time τ. It is short-sighted: it does not intend to optimise the algorithms' learning. Definition 2 (Short-sighted pseudo-regret).
The short-sighted pseudo-regret ρ σ ss (T) is the difference between the immediate best expected return algorithm and the one selected: DISPLAYFORM0 Theorem 2 (ESBAS short-sighted pseudo-regret). If the stochastic multi-armed bandit Ξ guarantees a regret of order of magnitude O(log(T)/∆ † β ), then: DISPLAYFORM1 Theorem 2 expresses in order of magnitude an upper bound for the short-sighted pseudo-regret of ESBAS. But first, let define the gaps: DISPLAYFORM2. It is the difference of expected return between the best algorithm during epoch β and algorithm α. The smallest non null gap at epoch β is noted: DISPLAYFORM3 if there is no non-null gap, the regret is null. Several upper bounds in order of magnitude on ρ ss (T) can be easily deduced from Theorem 2, depending on an order of magnitude of ∆ † β. See the corollaries in Section F.1, TAB0 and more generally Section 3 for a discussion. The short-sighted pseudo-regret optimality depends on the meta-algorithm itself. For instance, a poor deterministic algorithm might be optimal at meta-time τ but yield no new information, implying the same situation at meta-time τ + 1, and so on. Thus, a meta-algorithm that exclusively selects the deterministic algorithm would achieve a short-sighted pseudo-regret equal to 0, but selecting other algorithms are, in the long run, more efficient. Theorem 2 is a necessary step towards the absolute pseudo-regret analysis. The absolute pseudo-regret can be decomposed between the absolute pseudo-regret of the best canonical meta-algorithm (i.e. the algorithm that finds the best policy), the regret for not always selecting the best algorithm, and potentially not learning as fast, and the short-sighted regret: the regret for not gaining the returns granted by the best algorithm. This decomposition leads to Theorem 3 that provides an upper bound of the absolute pseudo-regret in function of the best canonical meta-algorithm, and the short-sighted pseudo-regret. The fairness of budget distribution is the property stating that every algorithm in the portfolio has as much resources as the others, in terms of computational time and data. Section 3 discusses it at length. For analysis purposes, Theorem 3 assumes the fairness of AS:Assumption 3 (Learning is fair). If one trajectory set is better than another for training one given algorithm, it is the same for other algorithms. DISPLAYFORM0 Theorem 3 (ESBAS absolute pseudo-regret upper bound). Under assumption 3, if the stochastic multi-armed bandit Ξ guarantees that the best arm has been selected in the T first episodes at least T /K times, with high probability 1 − δ T, with δ T ∈ O(1/T), then: DISPLAYFORM1 where meta-algorithm σ * selects exclusively algorithm α * = argmin α∈P ρ σ α abs (T). Successive and Median Elimination and Upper Confidence Bound BID2 under some conditions BID1 are examples of appropriate Ξ satisfying both conditions stated in Theorems 2 and 3. Again, see TAB0 and more generally Section 3 for a discussion of those bounds. C EXPERIMENTAL DETAILS C.1 DIALOGUE EXPERIMENTS DETAILS C.1.1 THE NEGOTIATION DIALOGUE GAME ESBAS practical efficiency is illustrated on a dialogue negotiation game BID7 that involves two players: the system p s and a user p u. Their goal is to find an agreement among 4 alternative options. At each dialogue, for each option η, players have a private uniformly drawn cost ν p η ∼ U to agree on it. Each player is considered fully empathetic to the other one. 
As a result, if the players come to an agreement, the system's immediate reward at the end of the dialogue is R_ps(s_f) = 2 − ν_η^ps − ν_η^pu, where s_f is the state reached by player p_s at the end of the dialogue, and η is the agreed option; if the players fail to agree, the final immediate reward is R_ps(s_f) = 0; and finally, if one player misunderstands and agrees on a wrong option, the system gets the cost of selecting option η without the reward of successfully reaching an agreement: R_ps(s_f) = −ν_η^ps − ν_η^pu. Players act each in turn, starting randomly with one or the other. They have four possible actions. First, REFPROP(η): the player proposes option η; if any option was previously proposed by the other player, the player refuses it. Second, ASKREPEAT: the player asks the other player to repeat its proposition. Third, ACCEPT(η): the player accepts option η that was understood to be proposed by the other player; this act ends the dialogue either way, whether the understood proposition was the right one or not. Fourth, ENDDIAL: the player does not want to negotiate anymore and ends the dialogue with a null reward. Understanding through speech recognition of system p_s is assumed to be noisy: with a sentence error rate of probability SER_s^u = 0.3, an error is made and the system understands a random option instead of the one that was actually pronounced. In order to reflect human-machine dialogue asymmetry, the simulated user always understands what the system says (its sentence error rate is zero). The system, and therefore the portfolio algorithms, have their action set restrained to five non-parametric actions: REFINSIST ⇔ REFPROP(η_{t−1}), η_{t−1} being the option lastly proposed by the system; REFNEWPROP ⇔ REFPROP(η), η being the preferred option after η_{t−1}; ASKREPEAT; ACCEPT ⇔ ACCEPT(η), η being the last understood option proposition; and ENDDIAL. All learning algorithms use Fitted-Q Iteration, with a linear parametrisation and an ε_β-greedy exploration, ε_β = 0.6^β, β being the epoch number. Six algorithms differing by their state space representation Φ_α are considered:
• simple: state space representation of four features: the constant feature φ_0 = 1, the last recognition score feature φ_asr, the difference between the cost of the proposed option and the next best option φ_dif, and finally an RL-time feature φ_t = 0.1t; i.e., Φ_α = {φ_0, φ_asr, φ_dif, φ_t}.
• fast: Φ_α = {φ_0, φ_asr, φ_dif}.
• simple-2: state space representation of the ten second-order polynomials of the simple features.
• fast-2: state space representation of the six second-order polynomials of the fast features.
• n-ζ-{simple/fast/simple-2/fast-2}: versions of the previous algorithms with ζ additional features of noise, randomly drawn from the uniform distribution.
• constant-µ: the algorithm follows a deterministic policy of average performance µ without exploration nor learning. Those constant policies are generated with simple-2 learning from a predefined batch of limited size.
In all our experiments, ESBAS has been run with UCB parameter ξ = 1/4. We consider 12 epochs. The first and second epochs last 20 meta-time steps, then their lengths double at each new epoch, for a total of 40,920 meta-time steps and as many trajectories; a sketch of this epoch-doubling selection loop is given below. γ is set to 0.9. The algorithms and ESBAS are playing with a stationary user simulator built through Imitation Learning from real human data. All the results are averaged over 1000 runs.
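To make the epoch-based selection concrete, the sketch below implements an ESBAS-style meta-algorithm loop as we read it from the description above: policies are retrained and then frozen at the start of each epoch, and a stochastic bandit chooses among them at every meta-time step. The UCB index, the `update`/`run` interface and the epoch-length schedule are placeholder assumptions for illustration, not the authors' reference implementation.

```python
import math

def esbas(algorithms, n_epochs=12, first_epoch_len=20, xi=0.25):
    """Sketch of an ESBAS-style meta-algorithm loop (illustrative only).

    Each element of `algorithms` is assumed to expose:
      - update(trajectories): retrain its policy on the shared trajectory set
      - run(): play one trajectory with the current (frozen) policy and
               return a (return, trajectory) pair.
    The UCB index mean + xi * sqrt(log(t) / n) is an assumed form of the
    stochastic bandit Xi; other bandits satisfying the stated regret
    guarantees could be substituted.
    """
    trajectories = []                      # shared trajectory set
    epoch_len = first_epoch_len
    for beta in range(n_epochs):
        # Policies are retrained once per epoch, then kept frozen.
        for alg in algorithms:
            alg.update(trajectories)
        counts = [0] * len(algorithms)     # bandit statistics reset each epoch
        sums = [0.0] * len(algorithms)
        for t in range(1, epoch_len + 1):
            def ucb(k):
                if counts[k] == 0:
                    return float("inf")    # try every arm at least once
                return sums[k] / counts[k] + xi * math.sqrt(math.log(t) / counts[k])
            k = max(range(len(algorithms)), key=ucb)
            ret, traj = algorithms[k].run()
            counts[k] += 1
            sums[k] += ret
            trajectories.append(traj)
        if beta >= 1:
            epoch_len *= 2                 # epochs last 20, 20, 40, 80, ... steps
    return trajectories
```

Freezing the policies within an epoch keeps each arm's return distribution fixed for the duration of that epoch, which is presumably what makes a standard stochastic bandit such as UCB usable as Ξ.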
The performance figures plot the curves of each algorithm's individual performance σ_α against the ESBAS portfolio control σ_ESBAS as a function of the epoch (the scale is therefore logarithmic in meta-time). The performance is the average reinforcement learning return: in the negotiation game, it equals the final reward R_ps(s_f) discounted according to the dialogue length. The ratio figures plot the average algorithm selection proportions of ESBAS at each epoch. We define the relative pseudo-regret as the difference between the ESBAS absolute pseudo-regret and the absolute pseudo-regret of the best canonical meta-algorithm. All relative pseudo-regrets, as well as the gain for not having chosen the worst algorithm in the portfolio, are provided in Table 2. Relative pseudo-regrets have a 95% confidence interval of about ±6 ≈ ±1.5 × 10^−4 per trajectory. Several results show that, in practice, the assumptions are transgressed. Firstly, we observe that Assumption 3 is transgressed. Indeed, it states that if a trajectory set is better than another for a given algorithm, then it is the same for the other algorithms. Still, this assumption infringement does not seem to harm the experimental results. It even seems to help in general: while this assumption is consistent with curriculum learning, it is inconsistent with the run adaptation property advanced in Subsection 6, which states that an algorithm might be the best on some runs and another one on other runs. And secondly, off-policy reinforcement learning algorithms exist, but in practice, we use state space representations that distort their off-policy property. However, experiments do not reveal any obvious bias related to the off/on-policiness of the trajectory sets the algorithms train on. The three DQN networks (small, large, and huge) are built in a similar fashion, with ReLU activations at each layer except for the output layer, which is linear, with the RMSprop optimizer (ρ = 0.95 and ε = 10^−…) and a no-op max of 30:
• small has a first convolution layer with a 4x4 kernel and a 2x2 stride, and a second convolution layer with a 4x4 kernel and a 2x2 stride, followed by a dense layer of size 128; the output layer is also dense.
• large has a first convolution layer with an 8x8 kernel and a 4x4 stride, and a second convolution layer with a 4x4 kernel and a 2x2 stride, followed by a dense layer of size 256; the output layer is also dense.
• huge has a first convolution layer with an 8x8 kernel and a 4x4 stride, a second convolution layer with a 4x4 kernel and a 2x2 stride, and a third convolution layer with a 3x3 kernel and a 1x1 stride, followed by a dense layer of size 512; the output layer is also dense.

Portfolio | w. best | w. worst
simple-2 + fast-2 | 35 | -181
simple + n-1-simple-2 | -73 | -131
simple + n-1-simple | 3 | -2
simple-2 + n-1-simple-2 | -12 | -38
all-4 + constant-1.10 | 21 | -2032
all-4 + constant-1.11 | -21 | -1414
all-4 + constant-1.13 | -10 | -561
all-4 | -28 | -275
all-2-simple + constant-1.08 | -41 | -2734
all-2-simple + constant-1.11 | -40 | -2013
all-2-simple + constant-1.13 | -123 | -799
all-2-simple | -90 | -121
fast + simple-2 | -39 | -256
simple-2 + constant-1.01 | 169 | -5361
simple-2 + constant-1.11 | 53 | -1380
simple-2 + constant-1.11 | 57 | -1288
simple + constant-1.08 | 54 | -2622
simple + constant-1.10 | 88 | -1565
simple + constant-1.14 | -6 | -297
all-4 + all-4-n-1 + constant-1.09 | 25 | -2308
all-4 + all-4-n-1 + constant-1.11 | 20 | -1324
all-4 + all-4-n-1 + constant-1.14 | -16 | -348
all-4 + all-4-n-1 | -10 | -142
all-2-simple + all-2-n-1-simple | -80 | -181
4*n-2-simple | -20 | -20
4*n-3-simple | -13 | -13
8*n-1-simple-2 | -22 | -22
simple-2 + constant-0.97 (no reset) | 113 | -7131
simple-2 + constant-1.05 (no reset) | 23 | -3756
simple-2 + constant-1.09 (no reset) | -19 | -2170
simple-2 + constant-1.13 (no reset) | -16 | -703
simple-2 + constant-1.14 (no reset) | -125 | -319

Table 2: ESBAS pseudo-regret after 12 epochs (i.e. 40,920 trajectories) compared with the best and the worst algorithms in the portfolio, as a function of the algorithms in the portfolio (described in the first column). The '+' character is used to separate the algorithms. all-4 means all four learning algorithms described in Section C.1.2: simple + fast + simple-2 + fast-2. all-4-n-1 means the same four algorithms with one additional feature of noise. Finally, all-2-simple means simple + simple-2 and all-2-n-1-simple means n-1-simple + n-1-simple-2. In the second column, the redder the colour, the worse ESBAS is achieving in comparison with the best algorithm. Inversely, the greener the colour of the number, the better ESBAS is achieving in comparison with the best algorithm. If the number is neither red nor green, it means that the difference between the portfolio and the best algorithm is insignificant and that they are performing equally well. It is already an achievement for ESBAS to be as good as the best. In the third column, the bluer the cell, the weaker is the worst algorithm in the portfolio. One can notice that positive regrets are always triggered by a very weak worst algorithm in the portfolio. In these cases, ESBAS did not manage to outperform the best algorithm in the portfolio, but it can still be credited with having efficiently dismissed the very weak algorithms in the portfolio. Theorem 1 (Not worse than the worst). The absolute pseudo-regret is bounded by the worst algorithm's absolute pseudo-regret in order of magnitude: DISPLAYFORM0 Proof. From Definition 1: DISPLAYFORM1 where sub_α(D) is the subset of D with all the trajectories generated with algorithm α, where τ_i^α is the index of the i-th trajectory generated with algorithm α, and where |S| is the cardinality of finite set S. By convention, let us state that Eµ DISPLAYFORM2 To conclude, let us prove by mathematical induction the following inequality: DISPLAYFORM3 It is true by vacuity for i = 0: both the left and right terms equal Eµ^α_∅. Now let us assume the property true for i and prove it for i + 1: DISPLAYFORM4. If |sub_α(D^σ_T)| ≥ i + 1, by applying the mathematical induction assumption, then by applying Assumption 2 and finally by applying Assumption 1 recursively, we infer that the inequality also holds for i + 1. The mathematical induction proof is complete.
This leads to the following inequalities: DISPLAYFORM5 DISPLAYFORM6 which leads directly to the : DISPLAYFORM7 This proof may seem to the reader rather complex for such an intuitive and loose but algorithm selection σ and the algorithms it selects may act tricky. For instance selecting algorithm α only when the collected trajectory sets contains misleading examples (i.e. with worse expected return than with an empty trajectory set) implies that the following unintuitive inequality is always true: Eµ DISPLAYFORM8. In order to control all the possible outcomes, one needs to translate the selections of algorithm α into σ α's view. Theorem 2 (ESBAS short-sighted pseudo-regret). If the stochastic multi-armed bandit Ξ guarantees a regret of order of magnitude O(log(T)/∆ † β ), then: DISPLAYFORM0 Proof. By simplification of notation, Eµ DISPLAYFORM1. From Definition 2: Since we are interested in the order of magnitude, we can once again only consider the upper bound of DISPLAYFORM2 DISPLAYFORM3 Theorem 3 (ESBAS absolute pseudo-regret upper bound). Under assumption 3, if the stochastic multi-armed bandit Ξ guarantees that the best arm has been selected in the T first episodes at least T /K times, with high probability 1 − δ T, with δ T ∈ O(1/T), then: DISPLAYFORM0 where meta-algorithm σ * selects exclusively algorithm α * = argmin α∈P ρ σ α abs (T).Proof. The ESBAS absolute pseudo-regret is written with the following notation simplifications: DISPLAYFORM1 and k τ = σ ESBAS (τ): Note that σ * is the optimal constant algorithm selection at horizon T, but it is not necessarily the optimal algorithm selection: there might exist, and there probably exists a non constant algorithm selection yielding a smaller pseudo-regret. DISPLAYFORM2 The ESBAS absolute pseudo-regret ρ σ ESBAS abs (T) can be decomposed into the pseudo-regret for not having followed the optimal constant algorithm selection σ * and the pseudo-regret for not having selected the algorithm with the highest return, i.e. between the pseudo-regret on the trajectory and the pseudo-regret on the immediate optimal return:. is to evaluate the size of sub * (D τ −1).On the one side, Assumption 3 of fairness states that one algorithm learns as fast as any another over any history. The asymptotically optimal algorithm(s) when τ → ∞ is(are) therefore the same one(s) whatever the the algorithm selection is. On the other side, let 1 − δ τ denote the probability, that at time τ, the following inequality is true: DISPLAYFORM0 With probability δ τ, inequality 34 is not guaranteed and nothing can be inferred about Eµ * sub * (Dτ−1), except it is bounded under by R min /(1 − γ). Let E DISPLAYFORM1 Let consider E * (α, N) the set of all sets D such that |sub α (D)|= N and such that last trajectory in D was generated by α. Since ESBAS, with Ξ, a stochastic bandit with regret in O(log(T)/∆), guarantees that all algorithms will eventually be selected an infinity of times, we know that: ∀α ∈ P, ∀N ∈ N, D ∈E + (α,N) P(D|σ ESBAS) = 1.By applying recursively Assumption 2, one demonstrates that: DISPLAYFORM2 D ∈E + (α,N) DISPLAYFORM3 One also notices the following piece-wisely domination from applying recursively Assumption 1: DISPLAYFORM4 (1 − δ τ)Eµ * ∞ − D ∈E; the meta-time spent on epoch β τ − 1 is equal to 2 βτ −1; the meta-time spent on epoch β is either below 2 βτ −1, in which case, the meta-time spent on epoch β τ − 1 is higher than τ 3, or the meta-time spent on epoch β is over 2 βτ −1 and therefore higher than τ 3. 
Thus, ESBAS is guaranteed to try the best algorithm α* at least τ/(3K) times with high probability 1 − δ_τ, with δ_τ ∈ O(τ^−1). As a result: DISPLAYFORM5 with κ = κ_3 (Eµ*_∞ − R_min/(1 − γ)), which proves the theorem. | This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning. | 1,282 | scitldr |
In this paper, we present a new generative model for learning latent embeddings. Compared to the classical generative process, where each observed data point is generated from an individual latent variable, our approach assumes a global latent variable to generate the whole set of observed data points. We then propose a learning objective that is derived as an approximation to a lower bound to the data log likelihood, leading to our algorithm, WiSE-ALE. Compared to the standard ELBO objective, where the variational posterior for each data point is encouraged to match the prior distribution, the WiSE-ALE objective matches the averaged posterior, over all samples, with the prior, allowing the sample-wise posterior distributions to have a wider range of acceptable embedding mean and variance and leading to better reconstruction quality in the auto-encoding process. Through various examples and comparison to other state-of-the-art VAE models, we demonstrate that WiSE-ALE has excellent information embedding properties, whilst still retaining the ability to learn a smooth, compact representation. Unsupervised learning is a central task in machine learning. Its objective can be informally described as learning a representation of some observed forms of information in a way that the representation summarizes the overall statistical regularities of the data BID0. Deep generative models are a popular choice for unsupervised learning, as they marry deep learning with probabilistic models to estimate a joint probability between high dimensional input variables x and unobserved latent variables z. Early successes of deep generative models came from Restricted Boltzmann Machines BID7 and Deep Boltzmann Machines BID15, which aim to learn a compact representation of data. However, the fully stochastic nature of the network requires layer-by-layer pre-training using MCMC-based sampling algorithms, ing in heavy computation cost. BID9 consider the objective of optimizing the parameters in an auto-encoder network by deriving an analytic solution to a variational lower bound of the log likelihood of the data, leading to the Auto-Encoding Variational Bayes (AEVB) algorithm. They apply a reparameterization trick to maximally utilize deterministic mappings in the network, significantly simplifying the training procedure and reducing instability. Furthermore, a regularization term naturally occurs in their model, allowing a prior p(z) to be placed over every sample embedding q(z|x). As a , the learned representation becomes compact and smooth; see e.g. FIG0 where we learn a 2D embedding of MNIST digits using 4 different methods and visualize the aggregate posterior distribution of 64 random samples in the learnt 2D embedding space. However, because the choice of the prior is often uninformative, the smoothness constraint imposed by this regularization term can cause information loss between the input samples and the latent embeddings, as shown by the merging of individual embedding distributions in FIG0 (d) (especially in the outer areas away from zero code). Extreme effects of such behaviours can be noticed from β-VAE BID6, a derivative algorithm of AEVB which further increases the weighting on the regularizing term with the aim of learning an even smoother, disentangled representation of the data. As shown in FIG0 (e), the individual embedding distributions are almost indistinguishable, leading to an overly severe information bottleneck which can cause high rates of distortion BID16. 
In contrast, perfect reconstruction can be achieved using WAE , but the learnt embedding distributions appear to severely non-smooth (FIG0), indicating a small amount of noise in the latent space would cause generation process to fail. In this paper, we propose WiSE-ALE (a wide sample estimator), which imposes a prior on the bulk statistics of a mini-batch of latent embeddings. Learning under our WiSE-ALE objective does not penalize individual embeddings lying away from the zero code, so long as the aggregate distribution (the average of all individual embedding distributions) does not violate the prior significantly. Hence, our approach mitigates the distortion caused by the current form of the prior constraint in the AEVB objective. Furthermore, the objective of our WiSE-ALE algorithm is derived by applying variational inference in a simple latent variable model (Section 2) and with further approximation, we derive an analytic form of the learning objective, ing in efficient learning algorithm. In general, the latent representation learned using our algorithm enjoys the following properties: 1) smoothness, as indicated in FIG0, the probability density for each individual embedding distribution decays smoothly from the peak value; 2) compactness, as individual embeddings tend to occupy a maximal local area in the latent space with minimal gaps in between; and 3) separation, indicated by the narrow, but clear borders between neighbouring embedding distributions as opposed to the merging seen in AEVB. In summary, our contributions are:• proposing a new latent variable model that uses a single global latent variable to generate the entire dataset,• deriving a variational lower bound to the data log likelihood in our latent variable model, which allows us to impose prior constraint on the bulk statistics of a mini-batch embedding distributions,• and deriving analytic approximations to the lower bound, leading to our efficient WiSE-ALE learning algorithm. In the rest of the paper, we first review directed graphical models in Section 2. We then derive our variational lower bound and its analytic approximations in Section 3. Related work is discussed in Section 4. Experiment are analyzed in Section 5, leading to in Section 6. Here we introduce the latent variable model used in our WiSE-ALE algorithm and compare with the latent variable model used in the AEVB algorithm BID9. DISPLAYFORM0, we assume x is generated from a latent variable z ∈ R dz of a much lower dimension. Here we denote x and z as random variables, DISPLAYFORM1 as the i-th input or latent code sample (i.e. a vector), and x i and z i as the random variable for x (i) and z (i). As shown in FIG1, this generative process can be modelled by a simple directed graphical model BID8, which models the joint probability distribution DISPLAYFORM2 is the data distribution for D N and p θ (x|z) and p θ (z|x) denote the complex transformation from the latent to the input space and reverse, where the transformation mapping is parameterised by θ. The learning task is to estimate the optimal set of θ so that this latent variable model can explain the data D N well. As the inference of the latent variable z given x (i.e. 
p θ (z|x)) cannot be directly estimated because p(x|D N) is unknown, both AEVB (FIG1) and our WiSE-ALE FIG1 ) resort to variational method to approximate the target distribution p θ (z|x) by a proposal distribution q φ (z|x) with the modified learning objective that both θ and φ are optimised so that the model can explain the data well and q φ (z|x) approaches p θ (z|x). The primary difference between the AEVB model and our WiSE-ALE model lies in how the joint probability p θ (x, z|D N) is modelled and specifically whether we assume an individual random variable for each latent code z (i). The AEVB model assumes a pair of random variables (x i, z i) for each x (i) and estimates the joint probability as DISPLAYFORM3 The equality between Eq. 2 and Eq. 3 can only be made with the assumption that the generation process for each x i is independent (first product in Eq. 3) and each z i is also independent (second product in Eq. 3). Such interpretation of the joint probability leads to the latent variable model in FIG1 (b) and the prior constraint (often taken as N (0, I) to encourage shrinkage when no data is observed) is imposed on every z i.In contrast, our WiSE-ALE model takes a single random variable to estimate the latent distribution for the entire dataset D N. Hence, the joint probability in our model can be broken down as DISPLAYFORM4 leading to the latent variable model illustrated in FIG1. The only assumption we make in our model is assuming the generative process of different input samples given the latent distribution of the current dataset as independent, which we consider as a sensible assumption. More significantly, we do not require independence between different z i as opposed to the AEVB model, leading to a more flexible model. Furthermore, the prior constraint in our model is naturally imposed on the aggregate posterior p(z|D N) for the entire dataset, leading to more flexibility for each individual sample latent code to shape an embedding distribution to preserve a better quality of information about the corresponding input sample. Neural networks can be used to parameterize p θ (x i |z i) in the generative model and q φ (z i |x i) in the inference model from the AEVB latent variable model or p θ (x i |z) and q φ (z|x i) correspondingly from our WiSE-ALE latent variable model. Both networks can be implemented through an auto-encoder network illustrated in FIG1 (d). In this section, we first define the aggregate posterior distribution p(z|D N) which serves as a core concept in our WiSE-ALE proposal. We then derive a variational lower bound to the data log likelihood log p(D N) using p(z|D N). Further, analytic approximation to the lower bound is derived, allowing efficient optimization of the model parameters and leading to our WiSE-ALE learning algorithm. Intuition of our proposal is also discussed. Here we formally define the aggregate posterior distribution p(z|D N), i.e. the latent distribution given the entire dataset D N. Considering DISPLAYFORM0 we have the aggregate posterior distribution for the entire dataset as the average of all the individual sample posteriors. The second equality in Eq. 9 is made by approximating the integral through summation. The third equality is obtained following the conventional assumption in the VAE literature that each input sample, DISPLAYFORM1, is drawn from the dataset D N with equal probability, i.e. 
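To keep the comparison of the two graphical models concrete, their factorisations can be summarised as below. The corresponding display equations are not preserved in the text, so this block is a reconstruction from the stated independence assumptions rather than a verbatim copy of the paper's equations.

```latex
% AEVB: one latent variable z_i per observation; both the per-sample
% generative terms and the latent variables are assumed independent,
% with the prior (often taken as N(0, I)) placed on every z_i.
p_\theta(\mathbf{x}, \mathbf{z} \mid \mathcal{D}_N)
  = \prod_{i=1}^{N} p_\theta(x_i \mid z_i)\, \prod_{i=1}^{N} p(z_i)

% WiSE-ALE: a single global latent variable z generates the whole dataset;
% only the generation of different samples given z is assumed independent,
% and the prior constraint acts on the aggregate posterior p(z | D_N).
p_\theta(\mathbf{x}, z \mid \mathcal{D}_N)
  = p(z \mid \mathcal{D}_N)\, \prod_{i=1}^{N} p_\theta(x_i \mid z)
```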
DISPLAYFORM2 Similarly, for the estimated aggregate posterior distribution q(z|D N), we have DISPLAYFORM3 To carry out variational inference, we minimize the KL divergence between the estimated and the true aggregate posterior distributions q φ (z|D N) and p θ (z|D N), i.e. DISPLAYFORM0 in Eq. 11 and breaking down the products and fractions inside the log, we have DISPLAYFORM1 Re-arranging the above equation, we have DISPLAYFORM2 There are two terms in the derived lower bound: 1 a reconstruction likelihood term that indicates how likely the current dataset D N are generated by the aggregate latent posterior distribution q φ (z|D N) and 2 a prior constraint that penalizes severe deviation of the aggregate latent posterior distribution q φ (z|D N) from the preferred prior p(z), acting naturally as a regularizer. By maximizing the lower bound L WiSE-ALE (φ, θ; D N) defined in Eq. 12, we are approaching to log p(D N) and, hence, obtaining a set of parameters θ and φ that find a natural balance between a good reconstruction likelihood (good explanation of the observed data) and a reasonable level of compliance to the prior assumption (achieving some preferable properties of the posterior distribution, such as smoothness and compactness). To allow fast and efficient optimization of the model parameters θ and φ, we derive analytic approximations for the two terms in our proposed lower bound (Eq. 12). To approximate 1 reconstruction likelihood term in Eq. 12, we first substitute the definition of the approximate aggregate posterior given in Eq. 10 in the expectation operation in DISPLAYFORM0 Now we can decompose the p θ (D N |z) as a product of individual sample likelihood, due to the conditional independence, i.e. DISPLAYFORM1 Substituting this into Eq. 13, we have DISPLAYFORM2 Eq. 15 can be used to evaluate the reconstruction likelihood for D N. However, learning directly with this reconstruction estimate does not lead to convergence in our experiments. We choose to simplify the reconstruction likelihood further to be able to reach convergence during learning at the cost of losing the lower bound property of the objective function L WiSE-ALE (φ, θ; D N). Firstly, we apply Jensen inequality to the term inside the expectation in Eq. 15, leading to an upper bound of the reconstruction likelihood term as DISPLAYFORM3 Now (N − 1) sample-wise likelihood distributions in the summation inside the log can be dropped with the assumption that the p θ (x (j) |z) will only be non-zero if z is sampled from the posterior distribution of the same sample x (j) at the encoder, i.e. i = j. Therefore, the approximation becomes DISPLAYFORM4 Using the approximation of the reconstruction likelihood term given by Eq. 17 rather than Eq. 15, we are able to reach convergence efficiently during learning at the cost of the estimated objective no longer remaining a lower bound to log p(D N). Details of deriving the above approximation are given in Appendix A. The 2 prior constraint term D KL q φ (z|D N) p(z) in our objective function (Eq. 12) evaluates the KL divergence between the approximate aggregate posterior distribution q φ (z|D N) and a zero-mean, unit-variance Gaussian distribution p(z). Here we assume that each sample-wise posterior distribution can be modelled by a factorial Gaussian distribution, i.e. q φ (z|x DISPLAYFORM0, where k indicates the k-th dimension of the latent variable z and µ k (x (i) ) and σ 2 k (x (i) ) are the mean and variance of the k-th dimension embedding distribution for the input x (i). 
Therefore, D_KL(q_φ(z|D_N) || p(z)) computes the KL divergence between a mixture of Gaussians (as in Eq. 10) and N(0, I). There is no analytical solution for such KL divergences. Hence, we derive an analytic upper bound allowing for efficient evaluation. DISPLAYFORM1 Applying Jensen's inequality, i.e. E_x log f(x) ≤ log E_x f(x), to the first term inside the summation in Eq. 18, we have DISPLAYFORM2 Taking advantage of the Gaussian assumption for q_φ(z|x^(i)) and p(z), we can compute the expectations in Eq. 20 analytically, with the results quoted below and the full derivation given in Appendix B. DISPLAYFORM3 where A = 2π (σ DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 When the overall objective function L_WiSE-ALE(φ, θ; D_N) in Eq. 12 is maximised, this upper bound approximation will approach the true KL divergence D_KL(q_φ(z|D_N) || p(z)), which ensures that the prior constraint on the overall aggregate posterior distribution takes effect. Combining the results from Sections 3.3.1 and 3.3.2, we obtain an analytic approximation L^approx_WiSE-ALE(φ, θ; D_N) to Eq. 12, as shown below: DISPLAYFORM0 where we use L(φ, θ | x^(i)) to denote the sample-wise reconstruction likelihood. By maximising L^approx_WiSE-ALE(φ, θ; D_N) w.r.t. the model parameters φ and θ, we are able to learn a model that naturally balances between a good embedding of the observed data and some preferred properties of the latent embedding distributions, such as smoothness and compactness.
Figure 3: Comparison between our WiSE-ALE learning scheme and the AEVB estimator. AEVB imposes the prior constraint on every sample embedding distribution, whereas our WiSE-ALE imposes the constraint on the overall aggregate embedding distribution over the entire dataset (over a mini-batch as an approximation for efficient learning).
DISPLAYFORM2 Comparing the objective function in our WiSE-ALE algorithm with that proposed in the AEVB algorithm BID9, DISPLAYFORM0 we notice that the difference lies in the form of the prior constraint; the difference is illustrated in Fig. 3. The AEVB learning algorithm imposes the prior constraint on every sample embedding distribution, and any deviation away from the zero code or the unit variance will incur a penalty. This will cause problems, as different samples cannot be simultaneously embedded to the zero code. Furthermore, when the model becomes more certain about the embedding of a specific sample as the learning continues, it will naturally favour a posterior distribution of small variance (e.g. less than 1). In contrast, our WiSE-ALE learning objective imposes the prior constraint on the aggregate posterior distribution, i.e. the average of all the sample embeddings. Such a prior constraint allows more flexibility for each sample posterior to settle at a mean and variance value in favour of good reconstruction quality, while preventing too large mean values (acting as a regulariser) or too small variance values (ensuring smoothness of the learnt latent representation). To investigate the different behaviours of the two prior constraints more concretely, we consider only two embedding distributions, q(z|x^(1)) and q(z|x^(2)) (red dashed lines), in a 1D latent space, as shown in FIG2. The mean values of the two embedding distributions are fixed to make the analysis simple and their variances are allowed to change.
When the variances of the two embedding distributions into the latent space (the more separable q(z|x ) and q(z|x ) are in the latent space, the easier it is to distinguish x are large, such as FIG2, q(z|x ) and q(z|x ) have a large area of overlap and it is difficult to distinguish the input samples x in the latent space, indicating the embedding only introduces a small level of information loss. Overall, the prior constraint in the AEVB objective favours the embedding distributions much closer to the uninformative N (0, I) prior, leading to large area of overlap between the individual posteriors, whereas our WiSE-ALE objective allows a wide range of acceptable embedding mean and variance, which will then offer more flexibility in the learnt posteriors to maintain a good reconstruction quality. So far our derivation has been for the entire dataset D N. Given a small subset B M with M samples randomly drawn from D N, we can obtain a variational lower bound for a mini-batch as: DISPLAYFORM0 When B M is reasonably large, then DISPLAYFORM1 Given the expressions for the objective functions derived in Section 3.3, we can compute the gradient for an approximation to the lower bound of a mini-batch B M and apply stochastic gradient ascent algorithm to iteratively optimize the parameters φ and θ. We can thus apply our WiSE-ALE algorithm efficiently to a mini-batch and learn a meaningful internal representation of the entire dataset. Algorithmically, WiSE-ALE is similar to AEVB, save for an alternate objective function as per Section 3.3.3. The procedural details of the algorithm are presented in Appendix C.4 RELATED WORK BID1 proposes that a learned representation of data should exhibit some generally preferable features, such as smoothness, sparsity and simplicity. However, these attributes are not tailored to any specific downstream tasks. Bayesian decision making (see e.g. ; BID4) requires consideration of a target task and proposes that any involved latent distribution approximations should be optimised for the performance over the target task, as well as conforming to the more general properties. The AEVB algorithm BID9 learns the latent posterior distribution under a reconstruction task, while simultaneously satisfying a prior constraint, which ensures the representation is smooth and compact. However, the prior constraint of the AEVB algorithm imposes significant influence on the solution space (as discussed in Section 3.4), and leads to a sacrifice of reconstruction quality. Our WiSE-ALE algorithm, however, prioritises the reconstruction task yet still enables globally desirable properties. WiSE-ALE is not the only algorithm that considers an alternate prior form to mitigate its impact on the reconstruction quality. The Gaussian Mixture VAE BID5 uses a Gaussian mixture model to parameterise p(z), encouraging more flexible sample posteriors. The Adversarial Auto-Encoder BID13 ) matches the aggregate posterior over the latent variables with a prior distribution through adversarial training. The WAE minimises a penalised form of the Wasserstein distance between the aggregate posterior distribution and the prior, claiming a generalisation of the AAE algorithm under the theory of optimal transport . More recently, the Sinkhorn Auto-Encoder BID14 ) builds a formal analysis of auto-encoders using an optimal transport based prior and uses the Sinkhorn algorithm as an alternative to estimate the Wasserstein distance in WAE.Our work differs from these in two main aspects. 
Firstly, our objective function can be evaluated analytically, leading to an efficient optimization process. In many of the above work, the optimization involves adversarial training and some hyper-parameter tuning, which leading to less efficient learning or even no convergence. Secondly, our WiSE-ALE algorithm naturally finds a balance between good reconstruction quality and preferred latent representation properties, such as smoothness and compactness, as shown in FIG0. In contrast, some other work sacrifice the properties of smoothness and compactness severely for improved reconstruction quality, as shown in FIG0. Many works have indicated that those properties of the learnt latent representation are essential for tasks that require optimisation over the latent space. We evaluate our WiSE-ALE algorithm in comparison with AEVB, β-VAE and WAE on the following 3 datasets. The implementation details for all experiments are given in Appendix E.1. Sine Wave. We generated 200,000 sine waves with small random noise: x(t) = A sin(2πf t + ϕ) +, each containing 256 samples, with independently sampled frequency f ∼ Unif(0, 20Hz), phase angle ϕ ∼ Unif(0, 2π) and amplitude A ∼ Unif. 2. MNIST . 70,000 28 × 28 binary images that contain hand-written digits.3. CelebA BID12. 202,599 RGB images of aligned celebrity faces of 218 × 178 are cropped to square images of 178 × 178 and resized to 64 × 64. Throughout all experiments, our method has shown consistently superior reconstruction quality compared to AEVB, β-VAE and WAE. FIG6 offers a graphical comparison across the reconstructed samples given by different methods for the sine wave and CelebA datasets. For the sine wave dataset, our WiSE-ALE algorithms achieves almost perfect reconstruction, whereas AEVB and β-VAE often struggle with low-frequency signals and have difficulty predicting the amplitude correctly. For the CelebA dataset, our WiSE-ALE manages to predict much sharper human faces, whereas the AEVB predictions are often blurry and personal characteristics are often ignored. WAE reaches a similar level of reconstruction quality to ours in some images, but it sometimes struggles with discovering the right elevation and azimuth angles, as shown in the second to the right column in FIG6. We understand that a good latent representation should not only reconstruct well, but also preserve some preferable qualities, such as smoothness, compactness and possibly meaningful interpretation of the original data. FIG0 indicates that our WiSE-ALE automatically learns a latent representation that finds a good tradeoff between minimizing the information loss and maintaining a smooth and compact aggregate posterior distribution. Furthermore, as shown in FIG7, we compare the ELBO values given by AEVB, β-VAE and our WiSE-ALE over training for the Sine dataset. Our WiSE-ALE manages to report the highest ELBO with a significantly lower reconstruction error and a fairly good performance in the KL divergence loss. This indicates that our WiSE-ALE is able to learn an overall good quality representation that is closest to the true latent distribution which gives rise to the data observation. In this paper, we propose a new latent variable model where a global latent variable is used to generate the entire dataset. We then derive a variational lower bound to the data log likelihood, which allows us to impose a prior constraint on the bulk statistics of the aggregate posterior distribution for the entire dataset. 
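For reference, the sine-wave dataset described above can be generated as in the sketch below; the frequency and phase ranges follow the description, while the amplitude interval and the noise scale are placeholders, since the text does not fully specify them.

```python
import numpy as np

def make_sine_dataset(n_samples=200_000, n_points=256, seed=0,
                      amp_range=(0.0, 2.0), noise_std=0.01):
    """Generate noisy sine waves x(t) = A sin(2*pi*f*t + phi) + eps.

    n_points=256 corresponds to a one-second signal sampled at 256 Hz.
    amp_range and noise_std are assumed placeholders; the paper's exact
    amplitude interval and noise level are not recoverable from the text.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_points, endpoint=False)        # 1 s at 256 Hz
    f = rng.uniform(0.0, 20.0, size=(n_samples, 1))             # frequency in Hz
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, 1))    # phase angle
    a = rng.uniform(*amp_range, size=(n_samples, 1))            # amplitude
    x = a * np.sin(2.0 * np.pi * f * t + phi)
    x += rng.normal(0.0, noise_std, size=x.shape)               # small random noise
    return x.astype(np.float32)
```

Each row of the returned array is one 256-sample waveform, matching the input dimensionality used for the sine-wave auto-encoder experiments.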
Using an analytic approximation to this lower bound as our learning objective, we propose WiSE-ALE algorithm. We have demonstrated its ability to achieve excellent reconstruction quality, as well as forming a smooth, compact and meaningful latent representation. In the future, we would like to understand the properties of the latent embeddings learnt through our method and apply it for suitable applications. In this appendix, we omit the trainable parameters φ and θ in the expressions of distributions for simplicity. For example, q(z|x) is equivalent to q φ (z|x) and p(x|z) represents p θ (x|z). Here we demonstration that the reconstruction term E q(z|D N) log p(D N |z) in our lower bound can be computed with individual sample likelihood log p(x (i) |z) and how our reconstruction error term becomes the same as the reconstruction term in the AEVB objective. Firstly, we can substitute DISPLAYFORM0 into the reconstruction term DISPLAYFORM1 Now we can decompose the the marginal likelihood of the entire dataset as a product of individual samples, due to the conditional independence, i.e. DISPLAYFORM2 Substituting this into the reconstruction term, we have: DISPLAYFORM3 To evaluate the reconstruction term in our lower bound, we need to do the following: 1) draw a sample x (i) from the dataset D N; 2) evaluate the latent code distribution q(z|x (i) ) through the encoder function q(·|x (i) ); 3) draw samples of z according to q(z|x (i) ); 4) reconstruct input samples using the sampled latent codes z (l); 5) compute the reconstruction error w.r.t to every single input sample and sum this error. We can simplify the above evaluation. Firstly, the sampling process in Step 3 can be replaced to a sampling process at the input using the reparameterisation trick. Besides, the sum of reconstruction errors w.r.t. all the input samples can be further simplified. To do this, we need to re-arrange the above expression as DISPLAYFORM4 log a i to the terms inside the expectation. As a , we have obtain an upper bound of the reconstruction error term as DISPLAYFORM5 This upper bound can be evaluated more efficiently with the assumption that the likelihood p(x (j) |z) representing the probability of a reconstructed sample from a latent code z imitating the sample x (j) will only be non-zero if z is sampled from the embedding prediction distribution with the same sample x (j) at the encoder input. With this assumption, N − 1 posterior distributions in the inner summation will be dropped as zeros and the only non-zero term is p(x (i) |z). Therefore, the upper bound becomes DISPLAYFORM6 The constant can be omitted, because it will not affect the gradient updates of the parameters. Applying Jensen inequality, i.e. DISPLAYFORM0 to the first term of above equation, we have DISPLAYFORM1 We will look at the two summation individually. The expectation w.r.t. the aggregate posterior can be expanded as DISPLAYFORM2 We assume the posterior distribution of the latent code z given a specific input sample DISPLAYFORM3 Similarly, DISPLAYFORM4 Therefore, DISPLAYFORM5 Substituting the exponential form for Gaussian distribution, i.e. DISPLAYFORM6 to the above equation, we have DISPLAYFORM7 The exponent of the above equation can be simplified to DISPLAYFORM8 Using the following properties, i.e. DISPLAYFORM9 we can evaluate the integral needed for DISPLAYFORM10 Therefore, we have obtained the expression for the first term in our upper bound, i.e. 
DISPLAYFORM11 DISPLAYFORM12 To find out the expression for the second term DISPLAYFORM13, we first examine the prior distribution p(z) which is chosen to be a zero-mean unit-variance Gaussian across all latent code dimensions, i.e. DISPLAYFORM14 Therefore, DISPLAYFORM15 Substituting this expression for log p(z) into DISPLAYFORM16 ) log p(z) and examining the expectation term for now, we have DISPLAYFORM17 The first integral q(z k |x (i) ) dz k = 1. To evaluate the second integral, we substitute Equation and use the following properties, i.e. DISPLAYFORM18 As a , we have DISPLAYFORM19 Therefore, DISPLAYFORM20 Combining the first term defined in Equation and the second term defined in Equation FORMULA12, we have obtained the expression for the overall upper bound as DISPLAYFORM21 + log 2π. We carry out experiments on four datasets (Sine wave, MNIST, Teapot and CelebA) to examine different properties of the latent representation learnt from the proposed WiSE algorithm. Specifically, we compare with β-VAE on the smoothness and disentanglement of the learnt representation and compare with WAE and AEVB on the reconstruction quality. In addition, by learning a 2D embedding of the MNIST dataset, we are able to visualise the latent embedding distributions learnt from AEVB, β-VAE, WAE and our WiSE and compare the compactness and smoothness of the learnt latent space across these methods. Here we give the implementation details for each dataset. We aim to learn a latent representation in R 4 for a one second long sine wave with sampling rate of 256Hz. The network architecture for the Sine wave dataset is shown below. x is an input sample, µ and σ are the latent code mean and latent code standard deviation to define the embedding distribution q(z|x) andx is the reconstructed input sample. is an auxiliary variable drawn from unit Gaussian at the input of the encoder network so that an estimate of a sample from the embedding distribution q(z|x) can be computed. Conv We aim to learn a 2D embedding of the MNIST dataset. The network architecture is shown below. Encoder network:x ∈ R 28×28×1 → Conv Decoder network: DISPLAYFORM0 ⇒x ∈ R We use the following hyper-parameters to train the network: | We propose a new latent variable model to learn latent embeddings for some high-dimensional data. | 1,283 | scitldr |
We improve the robustness of deep neural nets to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation function remarkably improves both classification accuracy and stability to adversarial perturbations. Together with the total variation minimization of adversarial images and augmented training, under the strongest attack, we achieve up to 20.6%, 50.7%, and 68.7% accuracy improvement w.r.t. the fast gradient sign method, iterative fast gradient sign method, and Carlini-WagnerL2attacks, respectively. Our defense strategy is additive to many of the existing methods. We give an intuitive explanation of our defense strategy via analyzing the geometry of the feature space. For reproducibility, the code will be available on GitHub. The adversarial vulnerability BID26 of deep neural nets (DNNs) threatens their applicability in security critical tasks, e.g., autonomous cars BID0, robotics BID8, DNN-based malware detection systems BID20 BID7. Since the pioneering work by BID26, many advanced adversarial attack schemes have been devised to generate imperceptible perturbations to sufficiently fool the DNNs BID6 BID19 BID5 BID29 BID11 BID2. And not only are adversarial attacks successful in white-box attacks, i.e. when the adversary has access to the DNN parameters, but they are also successful in black-box attacks, i.e. it has no access to the parameters. Black-box attacks are successful because one can perturb an image so it misclassifies on one DNN, and the same perturbed image also has a significant chance to be misclassified by another DNN; this is known as transferability of adversarial examples BID22 ). Due to this transferability, it is very easy to attack neural nets in a blackbox fashion BID14 BID4. In fact, there exist universal perturbations that can imperceptibly perturb any image and cause misclassification for any given network . There is much recent research on designing advanced adversarial attacks and defending against adversarial perturbation. In this work, we propose to defend against adversarial attacks by changing the DNNs' output activation function to a manifold-interpolating function, in order to seamlessly utilize the training data's information when performing inference. Together with the total variation minimization (TVM) and augmented training, we show state-of-the-art defense on the CIFAR-10 benchmark. Moreover, we show that adversarial images generated from attacking the DNNs with an interpolating function are more transferable to other DNNs, than those ing from attacking standard DNNs. Defensive distillation was recently proposed to increase the stability of DNNs which dramatically reduces the success rate of adversarial attacks BID21, and a related approach BID27 ) cleverly modifies the training data to increase robustness against black-box attacks, and adversarial attacks in general. To counter the adversarial perturbations, BID9 proposed to use image transformation, e.g., bit-depth reduction, JPEG compression, TVM, and image quilting. A similar idea of denoising the input was later explored by BID17, where they divide the input into patches, denoise each patch, and then reconstruct the image. These input transformations are intended to be non-differentiable, thus making adversarial attacks more difficult, especially for gradient-based attacks. BID25 noticed that small adversarial perturbations shift the distribution of adversarial images far from the distribution of clean images. 
Therefore, they proposed to purify the adversarial images by PixelDefend. Adversarial training is another family of defense methods to improve the stability of DNNs BID6 BID15 BID18. And GANs are also employed for adversarial defense BID24. In BID1, the authors proposed a straight-through estimation of the gradient to attack defense methods that are based on obfuscated gradients. Meanwhile, many advanced attack methods have been proposed to attack the DNNs BID29 BID11. Instead of using softmax functions as the DNNs' output activation, BID28 utilized a class of non-parametric interpolating functions. This is a combination of both deep and manifold learning which causes the DNNs to sufficiently utilize the geometric information of the training data. The authors show a significant amount of generalization accuracy improvement, and the results are more stable when one only has a limited amount of training data. In this section, we summarize the architecture, training, and testing procedures of the DNNs with the data-dependent activation BID28. An overview of training and testing of the standard DNNs with softmax output activation is shown in FIG0 (a) and (b), respectively. In the k-th iteration of training, given a mini-batch of training data X, Y, the procedure is: Forward propagation: Transform X into features by a DNN block (ensemble of convolutional layers, nonlinearities and others), and then pass this output through the softmax activation to obtain the predictions Ỹ: DISPLAYFORM0 Then the loss is computed (e.g., cross entropy) between Y and Ỹ: L = Loss(Y, Ỹ). Backpropagation: Update weights (Θ^{k−1}, W^{k−1}) by gradient descent (learning rate γ): DISPLAYFORM1 Once the model is optimized, the predicted labels for testing data X are obtained by the same forward propagation. BID28 proposed to replace the data-agnostic softmax activation by a data-dependent interpolating function, defined in the next section. Let X = {x^(1), x^(2), ..., x^(N)} denote the set of points to be labeled, and let X_te = {x_te^(1), ..., x_te^(m)} be a subset of X which is labeled with a label function g(x). We want to interpolate a function u that is defined on the entire manifold and can be used to label the entire dataset X. The harmonic extension is a natural and elegant approach to find such an interpolating function, which is defined by minimizing the Dirichlet energy functional: DISPLAYFORM5 with the boundary condition: DISPLAYFORM6 where w(x, y) is a weight function, typically chosen to be Gaussian: w(x, y) = exp(−||x − y||²/σ²), with σ being a scaling parameter. The Euler-Lagrange equation for Eq. FORMULA5 is: DISPLAYFORM8 By solving the linear system (Eq. FORMULA8), we obtain labels u(x) for unlabeled data x ∈ X/X_te. This interpolation becomes invalid when the labeled data is tiny, i.e., |X_te| ≪ |X/X_te|. To resolve this issue, the weights of the labeled data are increased in the Euler-Lagrange equation, which gives: DISPLAYFORM9 The solution u(x) to this equation is named the weighted nonlocal Laplacian (WNLL), denoted as WNLL(X, X_te, Y_te), where g(x) is the one-hot label for the example x. In both training and testing of the WNLL-activated DNNs, we need to reserve a small portion of data/label pairs, denoted as (X_te, Y_te), to interpolate the label for new data. We name the reserved data (X_te, Y_te) as the template. Directly replacing softmax by WNLL has difficulties in backpropagation, namely the true gradient ∂L/∂Θ is difficult to compute since WNLL defines a very complex implicit function. Instead, to train WNLL-activated DNNs, a proxy via an auxiliary neural net (FIG0) is employed.
On top of the original DNNs, we add a buffer block (a fully connected layer followed by a ReLU), and followed by two parallel branches, WNLL and the linear (fully connected) layers. The auxiliary DNNs can be trained by alternating between training DNNs with linear and WNLL activations, respectively. The training loss of the WNLL activation function is backpropped via a straight-through estimation approach BID1 BID3. At test time, we remove the linear classifier from the neural nets and use the DNN and buffer blocks together with WNLL to predict new data (FIG0 ; here for simplicity, we merge the buffer block to the DNN block. For a given set of testing data X, and the labeled template {(X te, Y te)}, the predicted labels for X is given bỹ DISPLAYFORM0 We consider three benchmark attack methods in this work, namely, the fast gradient sign method (FGSM) BID6, iterative FGSM (IFGSM) BID13, and CarliniWagner's L 2 (CW-L2) BID5 attacks. We denote the classifier defined by the DNNs with softmax activation asỹ = f (θ, x) for a given instance (x, y). FGSM finds the adversarial image x by maximizing the loss L(x, y), subject to the l ∞ perturbation ||x −x|| ∞ ≤ with as the attack strength. Under the first order approximation i.e., DISPLAYFORM0, the optimal perturbation is given by DISPLAYFORM1 IFGSM iterates FGSM to generate enhanced adversarial images, i.e., DISPLAYFORM2 where m = 1, · · ·, M, x = x and x = x (M), with M be the number of iterations. The CW-L2 attack is proposed to circumvent defensive distillation. For a given image-label pair (x, y), and ∀t = y, CW-L2 searches the adversarial image that will be classified to class t by solving the optimization problem: min DISPLAYFORM3 where δ is the adversarial perturbation (for simplicity, we ignore the dependence of θ in f).The equality constraint in Eq. FORMULA15 is hard to satisfy, so instead Carlini et al. consider the surrogate DISPLAYFORM4 where Z(x) is the logit vector for an input x, i.e., output of the neural net before the softmax layer. Z(x) i is the logit value corresponding to class i. It is easy to see that f (x + δ) = t is equivalent to g(x + δ) ≤ 0. Therefore, the problem in Eq. can be reformulated as DISPLAYFORM5 where c ≥ 0 is the Lagrangian multiplier. By letting δ = This unconstrained optimization problem can be solved efficiently by the Adam optimizer BID12. All three of the attacks clip the values of the adversarial image x to between 0 and 1. In this work, we focus on untargeted attacks and defend against them. For a given small batch of testing images (X, Y) and template (X te, Y te), we denote the DNNs modified with WNLL as output activation asỸ = WNLL(Z({X, X te}), Y te ), where Z({X, X te}) is the composition of the DNN and buffer blocks as shown in FIG0. By ignoring dependence of the loss function on the parameters, the loss function for DNNs with WNLL activation can be written asL(X, Y, X te, Y te). The above attacks for DNNs with WNLL activation on the batch of images, X, are formulated below. DISPLAYFORM0 • IFGSM DISPLAYFORM1 where m = 1, 2, · · ·, N; X = X and X = X (M).• CW-L2 DISPLAYFORM2 where i is the logit values of the input images X.Based on our numerical experiments, the batch size of X has minimal influence on the adversarial attack and defense. In all of our experiments we choose the batch size of X to be 500. Similar to BID28, we choose the size of the template to be 500.We apply the above attack methods to ResNet-56 with either softmax or WNLL as the output activation function. 
For IFGSM, we run 10 iterations of Eqs. FORMULA14 and FORMULA5 to attack DNNs with two different output activations, respectively. For CW-L2 attacks (Eqs.) in both scenarios, we set the parameters c = 10 and κ = 0. FIG2 depicts three randomly selected images (horse, automobile, airplane) from the CIFAR-10 dataset, their adversarial versions by different attack methods on ResNet-56 with two kinds of activation functions, and the TV minimized images. All attacks successfully fool the classifiers to classify any of them correctly. FIG2 shows that FGSM and IFGSM with perturbation = 0.02 changes the contrast of the images, while it is still easy for humans to correctly classify them. The adversarial images of the CW-L2 attacks are imperceptible, however they are extremely strong in fooling DNNs. FIG2 shows the images of (a) with a stronger attack, = 0.08. With a larger, the adversarial images become more noisy. The TV minimized images of FIG2 (a) and (b) are shown in FIG2 and FORMULA3, respectively. The TVM removes a significant amount of detailed information from the original and adversarial images, meanwhile it also makes it harder for humans to classify both the TV-minimized version of the original and adversarial images. Visually, it is hard to discern the adversarial images ing from attacking the DNNs with two types of output layers. We consider the geometry of features of the original and adversarial images. We randomly select 1000 training and 100 testing images from the airplane and automobile classes, respectively. We consider two visualization strategies for ResNet-56 with softmax activation: extract the original 64D features output from the layer before the softmax, and apply the principle component analysis (PCA) to reduce them to 2D. However, the principle components (PCs) do not encode the entire geometric information of the features. Alternatively, we add a 2 by 2 fully connected (FC) layer before the softmax, then utilize the 2D features output from this newly added layer. We verify that the newly added layer does not change the performance of ResNet-56 as shown in FIG3, and that the training and testing performance remains essentially the same for these two cases. Figure 4 (a) and (b) show the 2D features generated by ResNet-56 with additional FC layer for the original and adversarial testing images, respectively, where we generate the adversarial images by using FGSM (= 0.02). Before adversarial perturbation FIG4 ), there is a straight line that can easily separate the two classes. The small perturbation causes the features to overlap and there is no linear classifier that can easily separate these two classes FIG4 ). The first two PCs of the 64D features of the clean and adversarial images are shown in FIG4 and FORMULA3, respectively. Again, the PCs are well separated for clean images, while adversarial perturbation causes overlap and concentration. The bottom charts of FIG4 depict the first two PCs of the 64D features output from the layer before the WNLL. The distributions of the unperturbed training and testing data are the same, as illustrated in panels (e) and (f). The new features are better separated which indicates that DNNs with WNLL are more robust to small random perturbation. Panels (g) and (h) plot the features of the adversarial and TV minimized adversarial images in the test set. The adversarial attacks move the automobiles' features to the airplanes' region and TVM helps to eliminate the outliers. 
Based on our computation, most of the adversarial images of the airplane classes can be correctly classified with the interpolating function. The training data guides the interpolating function to classify adversarial images correctly. The fact that the adversarial perturbations change the features' distribution was also noticed in BID25. DISPLAYFORM0 To defend against adversarials, we combine the ideas of data-dependent activation, input transformation, and training data augmentation. We train ResNet-56, respectively, on the original training data, the TV minimized training data, and a combination of the previous two. On top of the datadependent activation output and augmented training, we further apply the TVM BID23 used by BID9 to transform the adversarial images to boost defensive performance. The basic idea is to reconstruct the simplest image z from the sub-sampled image, X x, with X the mask filled by a Bernoulli binary random variable, by solving the following TVM problem where λ T V > 0 is the regularization constant. DISPLAYFORM0 7 NUMERICAL To verify the efficacy of attack methods for DNNs with WNLL output activation, we consider the transferability of adversarial images. We train ResNet-56 on the aforementioned three types of training data with either softmax or WNLL activation. After the DNNs are trained, we attack them by FGSM, IFGSM, and CW-L2 with different. Finally, we classify the adversarial images by using ResNet-56 with the opponent activation. We list the mutual classification accuracy on adversarial images in Table. 1. The adversarial images ing from attacking DNNs with two types of activation functions are both transferable, as the mutual classification accuracy is significantly lower than testing on the clean images. Overall we see a remarkably higher accuracy when applying ResNet-56 with WNLL activation to classify the adversarial images ing from attacking ResNet-56 with softmax activation. For instance, for DNNs that are trained on the original images and attacked by FGSM, DNNs with the WNLL classifier have at least 5.4% higher accuracy (56.3% v.s. 61.7% ( = 0.08)). The accuracy improvement is more significant in many other scenarios.7.2 ADVERSARIAL DEFENSE FIG5 plots the of adversarial defense by combining the WNLL activation, TVM, and training data augmentation. Panels (a), (b) and (c) show the testing accuracy of ResNet-56 with and without defense on CIFAR-10 data for FGSM, IFGSM, and CW-L2, respectively. It can be observed that with increasing attack strength,, the testing accuracy decreases rapidly. FGSM is a relatively weak attack method, as the accuracy remains above 53.5% (= 0.1) even with the strongest attack. Meanwhile, the defense maintains accuracy above 71.8% (= 0.02). FIG5 (b) and (c) show that both IFGSM and CW-L2 can fool ResNet-56 near completely even with small. The defense maintains the accuracy above 68.0%, 57.2%, respectively, under the CW-L2 and IFGSM attacks. Compared to state-of-the-art defensive methods on CIFAR-10, PixelDefend, our method is much simpler and faster. Without adversarial training, we have shown our defense is more stable to IFGSM, and more stable to all three attacks under the strongest attack than PixelDefend BID25. Moreover, our defense strategy is additive to adversarial training and many other defenses including PixelDefend. To analyze the defensive contribution from each component of the defensive strategy, we separate the three parts and list the testing accuracy in Table. 2. 
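The following is a minimal sketch of the masked total variation minimization transform described above, assuming a per-element Bernoulli mask and a simple gradient-based solver in PyTorch. The drop probability, the value of λ_TV, the optimizer, and the number of steps are illustrative assumptions; the paper's exact solver and constants are not stated in this excerpt.

```python
import torch

def tv_minimize(x, keep_prob=0.5, lam_tv=0.03, steps=200, lr=0.1):
    """Reconstruct a simplified image z from randomly sub-sampled pixels of x by
    penalising total variation; x is a (C, H, W) tensor with values in [0, 1]."""
    mask = (torch.rand_like(x) < keep_prob).float()        # Bernoulli sub-sampling mask X
    z = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        fidelity = ((mask * (z - x)) ** 2).sum()           # match x only on kept pixels
        tv = (z[:, 1:, :] - z[:, :-1, :]).abs().sum() + \
             (z[:, :, 1:] - z[:, :, :-1]).abs().sum()      # anisotropic total variation
        loss = fidelity + lam_tv * tv
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach().clamp(0.0, 1.0)
```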
TVM alone cannot defend against FGSM attacks except when the DNNs are trained on the augmented data, as shown in the first and fourth horizontal blocks of the table. The WNLL activation improves the testing accuracy under adversarial attacks significantly and persistently. Augmented training consistently improves stability as well. In this paper, by analyzing the influence of adversarial perturbations on the geometric structure of the DNNs' features, we propose to defend against adversarial attacks by applying a data-dependent activation function, total variation minimization on the adversarial images, and training data augmentation. Results on ResNet-56 with the CIFAR-10 benchmark reveal that the defense improves robustness to adversarial perturbation significantly. Total variation minimization simplifies the adversarial images, which is very useful in removing adversarial perturbation. Another interesting direction to explore is to apply other denoising methods to remove adversarial perturbation. | We propose strategies for adversarial defense based on a data-dependent activation function, total variation minimization, and training data augmentation | 1,284 | scitldr |
This work presents a scalable solution to continuous visual speech recognition. To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video). In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words. The proposed system achieves a word error rate (WER) of 40.9% as measured on a held-out set. In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset when having access to additional types of contextual information. Our approach significantly improves on previous lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 89.8% and 76.8% WER respectively. Deep learning techniques have allowed for significant advances in lipreading over the last few years BID6 BID72 BID30 BID80. However, these approaches have often been limited to narrow vocabularies, and relatively small datasets BID6 BID72 BID80. Often the approaches focus on single-word classification BID26 BID11 BID76 BID67 BID44 BID68 BID45 BID51 BID52 BID46 BID29 BID4 BID69 BID75 and do not attack the continuous recognition setting. In this paper, we contribute a novel method for large-vocabulary continuous visual speech recognition. We report substantial reductions in word error rate (WER) over the state-of-the-art approaches even with a larger vocabulary. Assisting people with speech impairments is a key motivating factor behind this work. Visual speech recognition could positively impact the lives of hundreds of thousands of patients with speech impairments worldwide. For example, in the U.S. alone 103,925 tracheostomies were performed in 2014 , a procedure that can in a difficulty to speak (disphonia) or an inability to produce voiced sound (aphonia). While this paper focuses on a scalable solution to lipreading using a vast diverse dataset, we also expand on this important medical application in Appendix A. The discussion there has been provided by medical experts and is aimed at medical practitioners. We propose a novel lipreading system, illustrated in Figure 1, which transforms raw video into a word sequence. The first component of this system is a data processing pipeline used to create the Large-Scale Visual Speech Recognition (LSVSR) dataset used in this work, distilled from YouTube videos and consisting of phoneme sequences paired with video clips of faces speaking (3,886 hours of video). The creation of the dataset alone required a non-trivial combination of computer vision and machine learning techniques. At a high-level this process takes as input raw video and annotated audio segments, filters and preprocesses them, and produces a collection of aligned phoneme and lip frame sequences. Compared to previous work on visual speech recognition, our pipeline uses landmark smoothing, a blurriness filter, an improved speaking classifier network and outputs phonemes. The details of this process are described in Section 3. 
Figure 1: The full visual speech recognition system introduced by this work consists of a data processing pipeline that generates lip and phoneme clips from YouTube videos (see Section 3), and a scalable deep neural network for phoneme recognition combined with a production-grade word-level decoding module used for inference (see Section 4).Next, this work introduces a new neural network architecture for lipreading, which we call Vision to Phoneme (V2P), trained to produce a sequence of phoneme distributions given a sequence of video frames. In light of the large scale of our dataset, the network design has been highly tuned to maximize predictive performance subject to the strong computational and memory limits of modern GPUs in a distributed setting. In this setting we found that techniques such as group normalization BID79 to be key to the reported . Furthermore, our approach is the first to combine a deep learning-based visual speech recognition model with production-grade word-level decoding techniques. By decoupling phoneme prediction and word decoding as is often done in speech recognition, we are able to arbitrarily extend the vocabulary without retraining the neural network. Details of our model and this decoding process are given in Section 4. By design, the trained model only performs well under optimal lighting conditions, within a certain distance from a subject, and at high quality. It does not perform well in other contexts. Finally, this entire lipreading system in an unprecedented WER of 40.9% as measured on a held-out set from our dataset. In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset, depending on the amount of context given. Similarly, previous state-of-the-art approaches such as variants of and of Watch, Attend, and Spell (WAS) demonstrated WERs of only 89.8% and 76.8% respectively. While there is a large body of literature on automated lipreading, much of the early work focused on single-word classification and relied on substantial prior knowledge BID10 BID38 BID53 BID35 BID48 BID82 BID23 BID49 ). For example, BID20 predicted continuous sequences of tri-visemes using a traditional HMM model with visual features extracted from a codebook of clustered mouth region images. The predicted visemes were used to distinguish sentences from a set of 150 possible sentences. Furthermore, BID55 predict words and sequences digits using HMMs, introduce multi-stream HMMs, and improve the performance by using visual features in addition to the lip contours. Later, BID10 used coupled HMMs to jointly model audio and visual streams to predict sequences of digits. BID43 used HMMs for sentence-level speech recognition in noisy environments of the IBM ViaVoice dataset by fusing handcrafted visual and audio features. More recent attempts using traditional speech, vision and machine learning pipelines include the works of BID19 BID47; and BID7. For further details, we refer the reader to the survey material of BID57 and BID83.However, as noted by BID83 and BID6, until recently generalization across speakers and extraction of motion features have been considered open problems. Advances in deep learning have made it possible to overcome these limitations, but most works still focus on single-word classification, either by learning visual-only representations BID26 BID11 BID76 BID67 BID75, multimodal audio-visual representations BID44 BID68 BID45 BID51 BID52, or combining deep networks with traditional speech techniques (e.g. 
HMMs and GMM-HMMs) BID46 BID29 BID4 BID69.LipNet BID6 was the first end-to-end model to tackle sentence-level lipreading by predicting character sequences. The model combined spatiotemporal convolutions with gated recurrent units (GRUs) and was trained using the CTC loss function. LipNet was evaluated on the GRID corpus BID15, a limited grammar and vocabulary dataset consisting of 28 hours of 5-word sentences, where it achieved 4.8% and 11.4% WER in overlapping and unseen speaker evaluations respectively. By comparison, the performance of competent human lipreaders on GRID was 47.7%. LipNet is the closest model to our neural network. Several similar architectures were subsequently introduced in the works of BID72 who study audio-visual feature fusion, BID30 who work on a small subset of 18 phonemes and 11 words to predict digit sequences, and BID80 who presented a model cascading CTC with attention. were the first to use sequence-to-sequence models with attention to tackle audiovisual speech recognition with a real-world dataset. The model "Watch, Listen, Attend and Spell" (WLAS), consists of a visual (WAS) and an audio (LAS) module. To evaluate WLAS, the authors created LRS, the largest dataset at that point with approximately 246 hours of clips from BBC news broadcasts, and introduced an efficient video processing pipeline. The authors reported 50.2% WER, with the performance of professional lipreaders being 87.6% WER. extended the work to multi-view sentence-level lipreading, achieving 62.8% WER for profile views and 56.4% WER for frontal views. Both and pre-learn features with the audio-video synchronization classifier of BID12, and fix these features in order to compensate for the large memory requirements of their attention networks. Contemporaneously with our work, BID2 presented LRS3-TED, a dataset generated from English language talks available online. Using pre-learned features BID0 presented a seq2seq and a CTC architecture based on character-level self-attention transformer models. On LRS3-TED, these models achieved a WER of 57.9% and 61.8% respectively. Other related advances include works using vision for silent speech reconstruction BID32 BID16 BID3 BID18 and for separating an audio signal to individual speech sources BID17 BID1.In contrast to the approach of BID6, our model (V2P) uses a network to predict a sequence of phoneme distributions which are then fed into a decoder to produce a sequence of words. This flexible design enables us to easily accommodate very large vocabularies, and in fact we can extend the size of the vocabulary without having to retrain the deep network. Unlike previous work, V2P is memory and computationally efficient without requiring pre-trained features.Finally, the data processing pipeline used in this work in a significantly larger and more diverse training dataset than in all previous efforts. While the first large-vocabulary lipreading dataset was IBM ViaVoice BID43, more recently the far larger LRS and MV-LRS datasets were generated from BBC news broadcasts, and the LRS3-TED dataset was generated from conference talks. MV-LRS and LRS3-TED are the only publicly available large-vocabulary datasets, although both are limited to academic usage. In comparison, our dataset (LSVSR) is an order of magnitude greater than any previous dataset with 3,886 hours of audio-video-text pairs. In addition, the content is much more varied (i.e. not news-specific), ing in a 2.2× larger vocabulary of 127,055 words. 
FIG1 shows a comparison of sentence-level (word sequence) visual speech recognition datasets. In this section we discuss the data processing pipeline, again illustrated in Figure 1, used to create the LSVSR dataset. Our pipeline makes heavy use of large-scale parallel processing and is implemented as a number of independent modules and filters on top of FlumeJava BID9. In particular, our dataset is extracted from public YouTube videos. This is a common strategy for building datasets in ASR and speech enhancement BID33 BID31 BID65 BID17.In our case, we build on the work of BID33 to extract audio clips paired with transcripts, yielding 140,000 hours of audio segments. After post-processing we obtain a dataset consisting of paired video and phoneme sequences, where video sequences are represented as identically-sized frames (here, 128×128) stacked in the time-dimension. Although our pipeline is used to process clips pre-selected from YouTube BID33, only about 2% of clips satisfy our filtering criteria. Finally, by eliminating the components marked by dashes in Figure 1, i.e. those components whose primary use are in producing paired training data, this same pipeline can be used in combination with a trained model to predict word sequences from raw videos. In what follows we describe the individual components that make up this pipeline. Length filter, language filter. The duration of each segment extracted from YouTube is limited to between 1 and 12 seconds, and the transcripts are filtered through a language classifier BID61 to remove non-English utterances. For evaluation, we further remove the utterances containing fewer than 6 words. Finally, the aligned phoneme sequences are obtained via a standard forced alignment approach using a lexicon with multiple pronunciations BID33. The phonetic alphabet is a reduced version of X-SAMPA with 40 phonemes plus silence. Raw videos, shot boundary detection, face detection. Constant spatial padding in each video segment is eliminated before a standard, thresholding color histogram classifier BID36 identifies and removes segments containing shot boundaries. FaceNet BID62 is used to detect and track faces in every remaining segment. Clip quality filter. Speech segments are joined with the set of tracked faces identified in the previous step and filtered based on the quality of the video, removing blurry clips and clips including faces with an eye-to-eye width of less than 80 pixels. Frame rates lower than 23fps are also eliminated BID60 BID71. We allow a range of input frame rates-varying frame rates has an effect similar to different speaking paces-however, frame rates above 30fps are downsampled. Face landmark smoothing. The segments are processed by a face landmark tracker and the ing landmark positions are smoothed using a temporal Gaussian kernel. Intuitively, this simplifies learning filters for the 3D convolution layers by reducing spatiotemporal noise. Empirically, our preliminary studies showed smoothing was crucial for achieving optimal performance. Next, following previous literature, we keep segments where the face yaw and pitch remain within ±30°. Models trained outside this range perform worse.View canonicalization. We obtain canonical faces using a reference canonical face model and by applying an affine transformation on the landmarks. Then, we use a thumbnail extractor which is configured to crop the area around the lips of the canonical face. Speaking filter. 
Using the extracted and smoothed landmarks, minor lip movements and nonspeaking faces are discarded using a threshold filter. This process involves computing the mouth openness in all frames, normalizing by the size of the face bounding box, and then thresholding on the standard deviation of the normalized openness. This classifier has very low computational cost, but high recall, e.g. voice-overs are not handled. It's contribution was very important to process approximately 16 years of audio-video-text pairs within reasonable time. Speaking classifier. As a final step, we build V2P-Sync, a neural network architecture to verify the audio and video channel alignment inspired by the work of BID12 and BID74. However, V2P-Sync takes advantage of face landmark smoothing, 3D convolutions, and high resolution inputs. V2P-Sync uses longer time segments as inputs and spatiotemporal convolutions as compared to the spatial-only convolutions of Chung & Zisserman, and view canonicalization and higher resolution inputs (128 × 128 vs 100 × 60) as compared to Torfi et al.. These characteristics facilitate the extraction of temporal features which is key to our task. V2P-Sync, takes as input a pair of a log mel-spectrogram and 9 grayscale video frames and produces an embedding for each using two separate neural network architectures. If the Euclidean distance of the audio and video embeddings is less than a given threshold the pair is classified as synchronized. The architecture is trained using a contrastive loss similar to Chung & Zisserman. Since there is no labeled data for training, the initial unfiltered pairs are used as positive samples with negative samples generated by randomly shifting the video of an unfiltered pair. After convergence the dataset is filtered using the trained model, which is then fine-tuned on the ing subset of the initial dataset. The final model is used to filter the dataset a second time, achieving an accuracy of 81.2%. This accuracy is improved as our audio-video pairs are processed by sliding V2P-Sync on 100 equally spaced segments and their scores are averaged. For further architectural details, we refer the reader to Appendix F. This work introduces the V2P model, which consists first of a 3d convolutional module for extracting spatiotemporal features from a given video clip. These features are then aggregated over time with a temporal module which outputs a sequence of phoneme distributions. Given input video clips and target phoneme sequences the model is trained using the CTC loss function. Finally, at test-time, a decoder based on finite state transducers (FSTs) is used to produce a word sequence given a sequence of phoneme distributions. For further details we refer the reader to Appendix G.Neural network architecture. Although the use of optical-flow filters as inputs is commonplace in lipreading BID37 BID22 BID81 BID70 BID77 BID63, in this work we designed a vision module based on VGG BID64 to explicitly address motion feature extraction. We adapted VGG to make it volumetric, which proved crucial in our preliminary empirical evaluation and has been established in previous literature BID6. The intuition behind this is the importance of spatiotemporal relationships in human visual speech recognition, e.g. measuring how lip shape changes over time. Furthermore, the receptive field of the vision module is 11 video frames, roughly 0.36-0.44 seconds, or around twice the typical duration of a phoneme. 
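Before continuing with the V2P architecture, here is a small illustration of the pipeline's speaking filter described above: compute a per-frame mouth-openness value from the smoothed landmarks, normalize it by the face size, and threshold the standard deviation over the clip. The landmark field names, the openness measure, and the threshold value are hypothetical placeholders, not the pipeline's actual quantities.

```python
import numpy as np

def is_speaking(landmarks, face_heights, std_threshold=0.01):
    """Low-cost speaking filter: keep a clip only if the normalised mouth openness
    varies enough over time. `landmarks` is a per-frame dict of lip landmark arrays
    (names here are illustrative), `face_heights` the per-frame face bounding-box size."""
    openness = []
    for lm, box_h in zip(landmarks, face_heights):
        top_lip = np.asarray(lm["upper_lip_bottom"])     # hypothetical landmark keys
        bottom_lip = np.asarray(lm["lower_lip_top"])
        openness.append(np.linalg.norm(top_lip - bottom_lip) / box_h)  # normalise by face size
    return np.std(openness) > std_threshold
```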
One of the main challenges in training a large vision module is finding an effective balance between performance and the imposed constraints of GPU memory. Our vision module consists of 5 convolutional layers with filters. By profiling a number of alternative architectures, we found that high memory usage typically came from the first two convolutional layers. To reduce the memory footprint we limit the number of convolutional filters in these layers, and since the frame is centered around the lips, we omit spatial padding. Since phoneme sequences can be quite long, but with relatively low frame rate (approximately 25-30 fps), we maintain padding in the temporal dimension and always convolve with unit stride in order to avoid limiting the number of output tokens. Despite tuning the model to reduce the number of activations, we are still only able to fit 2 batch elements on a GPU. Hence, we distribute training across 64 workers in order to achieve a batch size of 128. Due to communication costs, batch normalization is expensive if one wants to aggregate the statistics across all workers, and using only two examples per batch in noisy normalization statistics. Thus, instead of batch normalization, we use group normalization BID79, which divides the channels into groups and computes the statistics within these groups. This provides more stable learning regardless of batch size. The outputs of the convolutional stack are then fed into a temporal module which performs longerscale aggregation of the extracted features over time. In constructing this component we evaluated a number of recurrent neural network and dilated convolutional architectures, the latter of which are evaluated later as baselines. The best architecture presented performs temporal aggregation using a stack of 3 bidirectional LSTMs BID27 ) with a hidden state of 768, interleaved with group normalization. The output of these LSTM layers is then fed through a final MLP layer to produce a sequence of exactly T conditionally independent phoneme distributions p(u t |x). This entire model is then trained using the CTC loss we describe next. This model architecture is similar to that of the closest related work, LipNet BID6, but differs in a number of crucial ways. In comparison to our work, LipNet used GRU units and dropout, both of which we found to perform poorly in preliminary experiments. Our model is also much bigger: LipNet consists of only 3 convolutional layers of filters and 3 GRU layers with hidden state of size 256. Although the small size of LipNet means that it does not require any distributed computation to reach effective batch sizes, we will see that this drop in size coincides with a similar drop in performance. Finally, while both models use a CTC loss for training, the architecture used in V2P is trained to predict phonemes rather than characters; as we argue shortly this provides V2P with a much simpler mechanism for representing word uncertainty. CTC is a loss function for the parameterization of distributions over sequences of label tokens, without requiring alignments of the input sequence to the label tokens BID21. To see how CTC works, let V denote the set of singletimestep label tokens. To align a label sequence with size-T sequences given by the temporal module, CTC allows the model to output blank symbols and repeat consecutive symbols. 
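A rough PyTorch sketch of the layout just described (volumetric convolution stack with group normalization, a bidirectional LSTM stack, and a per-timestep phoneme classifier) is given below. The channel counts, pooling schedule, spatial aggregation before the LSTMs, and exact placement of group normalization are assumptions made for illustration; the paper only partially specifies them in this excerpt.

```python
import torch
import torch.nn as nn

class V2PSketch(nn.Module):
    """Illustrative V2P-style model: 3D conv stack -> BiLSTM stack -> phoneme log-probs."""
    def __init__(self, num_phonemes=41, channels=(64, 128, 256, 512, 512)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in channels:
            # temporal padding only: keep the number of timesteps, shrink space
            layers += [nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=(1, 0, 0)),
                       nn.GroupNorm(num_groups=8, num_channels=out_ch),
                       nn.ReLU(),
                       nn.MaxPool3d(kernel_size=(1, 2, 2))]
            in_ch = out_ch
        self.vision = nn.Sequential(*layers)
        self.temporal = nn.LSTM(input_size=channels[-1], hidden_size=768,
                                num_layers=3, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * 768, num_phonemes + 1)   # phonemes + CTC blank

    def forward(self, video):                         # video: (B, 1, T, 128, 128)
        feats = self.vision(video)                    # (B, C, T, h, w)
        feats = feats.mean(dim=(3, 4)).transpose(1, 2)   # pool space -> (B, T, C)
        feats, _ = self.temporal(feats)
        return self.classifier(feats).log_softmax(dim=-1)   # (B, T, phoneme log-probs)
```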
Let the function B: (V ∪ {}) * → V * be defined such that, given a string potentially containing blank tokens, it deletes adjacent duplicate characters and removes any blanks. The probability of observing label sequence y can then be obtained by marginalizing over all possible alignments of this label, DISPLAYFORM0, where x is input video. For example, if T = 5 the probability of sequence'bee' is given by p(be e) + p(be e) + · · · + p(bbe e) + p(be ee). Note that there must be a blank between the'e' characters to avoid collapsing the sequence to'be'.Since CTC prevents us from using autoregressive connections to handle inter-timestep dependencies of the label sequence, the marginal distributions produced at each timestep of the temporal module are conditionally independent, as pointed out above. Therefore, to restore temporal dependency of the labels at test-time, CTC models are typically decoded with a beam search procedure that combines the probabilities with that of a language model. Rationale for phonemes and CTC. In speech recognition, whether on audio or visual signals, there are two main sources of uncertainty: uncertainty in the sounds that are in the input, and uncertainty in the words that correspond to these sounds. This suggests modelling DISPLAYFORM1 where the approximation is by the assumption that a given word sequence often has a single or dominant pronunciation. While previous work uses CTC to model characters given audio or visual input directly BID6 BID5, we argue this is problematic as the conditional independence of CTC timesteps means that the temporal module must assign a high probability to a single sequence in order to not produce spurious modes in the CTC distribution. To explain why modeling characters with CTC is problematic, consider two character sequences "fare" and "fair" that are homophones, i.e. they have the same pronunciation (i.e. /f :/). The difficulty we will describe is independent of the model used, so we will consider a simple unconditional model where each character c is assigned probability given by the parameters π c t = P (u t = c) and the probability of a sequence is given by its product, e.g. p(fare) = π. The maximum likelihood estimate, arg max π p(fare)p(fair), however, assigns equal 1/4 probability to each of "fare", "fair", "faie", "farr", as shown in FIG2, ing in two undesirable words. Ultimately this difficulty arises due to the independence assumption of CTC and the many-to-many mapping of characters to words 1. This same difficulty arises if we replace the parameters above with the outputs of a network mapping from videos to tokens. An additional source of uncertainty in visual speech recognition is introduced by the fact that the information required to disambiguate different phonemes is not visible (e.g. the position of the tongue). The ing visually similar phonemes are called visemes. This can be seen in Appendix C where the ratio of insertions and substitutions when computing the edit distance of a visual model is substantially higher than the ratio of an audio model trained on the same dataset. Furthermore, recent literature suggests that phoneme-to-viseme mappings may differ per speaker BID8, making it difficult to incorporate this knowledge in the design. Thus, using phonemes, which have a one-to-many mapping to words, allows the temporal model to only model visual uncertainty, and the word uncertainty can instead be handled by the decoder described below. 
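To make the collapse function B and the alignment marginalization concrete, the sketch below collapses a single alignment and brute-forces the marginal probability of a label by enumerating alignments. Real implementations use the standard CTC dynamic program (e.g. torch.nn.CTCLoss); this enumeration is purely illustrative.

```python
from itertools import product

def ctc_collapse(alignment, blank="_"):
    """The CTC mapping B: collapse adjacent repeats, then drop blanks.
    ['b','e','_','e','_'] -> 'bee', while ['b','e','e','e','e'] -> 'be'."""
    out, prev = [], None
    for tok in alignment:
        if tok != prev and tok != blank:
            out.append(tok)
        prev = tok
    return "".join(out)

def label_probability(label, timestep_probs, vocab):
    """Brute-force marginal p(y|x): sum the product of per-timestep token probabilities
    over every alignment that collapses to `label` (exponential cost; illustration only).
    `timestep_probs` is a list of dicts mapping token -> probability at each timestep."""
    total = 0.0
    for path in product(vocab, repeat=len(timestep_probs)):
        if ctc_collapse(path) == label:
            p = 1.0
            for t, tok in enumerate(path):
                p *= timestep_probs[t][tok]
            total += p
    return total

# Example: with T = 5 timesteps over vocab ['b', 'e', '_'], p('bee') sums over every
# length-5 path whose collapse is 'bee', e.g. 'b e _ e _' or 'b b e _ e'.
```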
Alternatively to using phonemes with CTC, some previous work solves this problem using RNN transducers or sequence-to-sequence with attention, which jointly model all sources of uncertainty. However, showed in the context of acoustic speech recognition that these models were unable to significantly outperform a baseline CTC model (albeit using context-dependent phonemes and further sequence-discriminative training) when combined with a decoding pipeline similar to ours. Hence, for reasons of performance and easier model training, especially important with our large model, we choose to output phonemes rather than words or characters directly. Additionally, and crucial for many applications, CTC also provides extra flexibility over alternatives. The fact that the lexicon (phoneme to word mapping) and language model are separate and part of the decoder, affords one the ability to trivially change the vocabulary and language model (LM) arbitrarily. This allows for visual speech recognition in narrower domains or updating the vocabulary and LM with new words without requiring retraining of the phoneme recognition model. This is nontrivial in other models, where the language model is part of the RNN.Decoding. As described earlier, our model produces a sequence of phoneme distributions; given these distributions we use an industry-standard decoding method using finite state transducers (FSTs) to arrive at word sequences. Such techniques are extensively used in speech recognition (e.g. BID40 BID39 ; we refer the reader to the thorough presentation of BID41 . In our work we make use of a combination of three individual (weighted) FSTs, or WFSTs. The first CTC postprocessing FST removes duplicate symbols and CTC blanks. Next, a lexicon FST maps input phonemes to output words. Third, an n-gram language model with backoff can be represented as a WFST from words to words. In our case, we use a 5-gram model with Katz backoff with about 50 million n-grams and a vocabulary size of about one million. The composition of these three FSTs another WFST transducing from phoneme sequences to (reweighted) word sequences. Finally, a search procedure is employed to find likely word sequences from phoneme distributions. We examine the performance of V2P trained on LSVSR with hyperparameters tuned on a validation set. We evaluate it on a held-out test set roughly 37 minutes long, containing approximately 63,000 video frames and 7100 words. We also describe and compare against a number of alternate methods from previous work. In particular, we show that our system gives significant performance improvements over professional lipreaders as well previous state-of-the-art methods for visual speech recognition. Except for V2P-NoLM, all models used the same 5-gram word-level language model during decoding. To construct the validation and test sets we removed blurry videos by thresholding the variance of the Laplacian of each frame BID50; we kept them in the training set as a form of data augmentation. Professional lipreaders. We consulted a professional lipreading company to measure the difficulty of LSVSR and hence the impact that such a model could have. Since the inherent ambiguity in lipreading necessitates relying on context, we conducted experiments both with and without context. In both cases we generate modified clips from our test set, but cropping the whole head in the video, as opposed to just the mouth region used by our model. 
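A sketch of the blurriness check mentioned above, using the variance of the Laplacian of each frame (OpenCV). The threshold value and the per-clip aggregation by median are illustrative choices, not the values used to build the evaluation sets.

```python
import cv2
import numpy as np

def frame_sharpness(frame_bgr):
    """Variance of the Laplacian: low values indicate few sharp edges, i.e. a blurry frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def clip_is_blurry(frames, threshold=100.0):
    """Flag a clip as blurry if its median frame sharpness falls below the threshold."""
    scores = [frame_sharpness(f) for f in frames]
    return float(np.median(scores)) < threshold
```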
The lipreaders could view the video up to 10 times, at half or normal speed each time. To measure without-context performance, we selected clips with transcripts that had at least 6 words. To measure how much context helps performance, we selected clips with at least 12 words, and presented to the lipreader the first 6 words, the title, and the category of the video, then asked them to transcribe the rest of the clip. The lipreaders transcribed a subset of our test set containing 153 and 274 videos with and without context, respectively. Audio-Ph. For an approximate bound on performance, we train an audio speech recognition model on the audio of the utterances. The architecture is based on Deep Speech 2 BID5, but trained to predict phonemes rather than characters. Baseline-LipNet-Ch. Using our training setup, we replicate the character-level CTC architecture of LipNet BID6. As with the phoneme models, we use an FST decoding pipeline and the same language model, but instead of a phoneme-based lexicon we use a character-level one as described in BID40.Baseline-LipNet-Ph. We also train LipNet to predict phonemes, still with CTC and using the same FST-based decoding pipeline and language model. Baseline-LipNet-Large-Ph. Recall from the earlier discussion that LipNet uses dropout, whereas V2P makes heavy use of group normalization, crucial for our small batches per worker. For a fair size-wise comparison, we introduce a replica of V2P, that uses GRUs, dropout, and no normalization. Baseline-Seq2seq-Ch. Using our training setup, we compared to a variant of the previous stateof-the-art sequence-to-sequence architecture of WAS that predicts character sequences. Although their implementation was followed as closely as possible, training end-toend quickly exceeded the memory limitations of modern GPUs. To work around these problems, the authors kept the convolutional weights fixed using a pretrained network from audio-visual synchronization classification BID12 ), which we were unable to use as their network inputs were processed differently. Instead, we replace the 2D convolutional network with the improved lightweight 3D visual processing network of V2P. From our empirical evaluation, including preliminary experiments not reported here and as shown by earlier work BID6, we believe that the 3D spatiotemporal aggregation of features benefits performance. After standard beam search decoding, we use the same 5-gram word LM as used for the CTC models to perform reranking. V2P-FullyConv. Identical to V2P, except the LSTMs in the temporal aggregation module are replaced with 6 dilated temporal convolution layers with a kernel size of 3 and dilation rates of, yielding a fully convolutional model with 12 layers. V2P-NoLM. Identical to V2P, except during decoding, where the LM is replaced with a dictionary consisting of 100k words. The words are then weighted by their smoothed frequency in the training data, essentially a uni-gram language model. TAB1 shows the phoneme error rate, character error rate, and word error rate for all of the models, and the number of parameters for each. The error rates are computed as the sum of the edit distances of the predicted and ground-truth sequence pairs divided by total ground-truth length. We also compute and display the standard error associated with each rate, estimated by bootstrap sampling. 
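A small sketch of how these error rates and their bootstrap standard errors can be computed: sum per-utterance edit distances, divide by the total reference length, and resample utterance pairs to estimate the standard error. The number of resamples is an arbitrary illustrative choice.

```python
import numpy as np

def edit_distance(ref, hyp):
    """Levenshtein distance between token sequences (insertions, deletions, substitutions)."""
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)
    d[0, :] = np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + sub)
    return int(d[-1, -1])

def error_rate(refs, hyps):
    """Sum of edit distances divided by total ground-truth length (PER/CER/WER)."""
    return sum(edit_distance(r, h) for r, h in zip(refs, hyps)) / sum(len(r) for r in refs)

def bootstrap_stderr(refs, hyps, n_resamples=1000, seed=0):
    """Standard error of the error rate, estimated by resampling utterance pairs."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(n_resamples):
        idx = rng.integers(0, len(refs), size=len(refs))
        rates.append(error_rate([refs[i] for i in idx], [hyps[i] for i in idx]))
    return float(np.std(rates))
```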
TAB1 — number of parameters, phoneme error rate (PER), character error rate (CER), and word error rate (WER) for all models:
Model | Params | PER (%) | CER (%) | WER (%)
Professional w/o context | − | − | − | 92.9 ± 0.9
Professional w/ context | − | − | − | 86.4 ± 1.4
Audio-Ph | 58M | 12.5 ± 0.5 | 11.5 ± 0.6 | 18.3 ± 0.9
Baseline-LipNet-Ch | 7M | − | 64.6 ± 0.5 | 93.0 ± 0.6
Baseline-LipNet-Ph | 7M | 65.8 ± 0.4 | 72.8 ± 0.5 | 89.8 ± 0.5
Baseline-Seq2seq-Ch | 15M | − | 49.9 ± 0.6 | 76.8 ± 0.8
Baseline-LipNet-Large-Ph | 40M | 53.0 ± 0.5 | 54.0 ± 0.8 | 72.7 ± 1.0
V2P-FullyConv | 29M | 41.3 ± 0.6 | 36.7 ± 0.9 | 51.6 ± 1.2
V2P-NoLM | 49M | 33.6 ± 0.6 | 34.6 ± 0.8 | 53.6 ± 1.0
V2P | 49M | 33.6 ± 0.6 | 28.3 ± 0.9 | 40.9 ± 1.2
These results show that the variant of LipNet tested in this work performs approximately on par with professional lipreaders (89.8% WER versus 86.4%), even when the professionals are given additional context. Similarly, we see that the WAS variant provides a substantial reduction of this error, resulting in a WER of 76.8%. However, the full V2P method presented in this work is able to further halve the WER, obtaining a value of 40.9% at testing time. Interestingly, we see that although the bi-directional LSTM provides the best performance, using a fully-convolutional network still results in performance that is significantly better than all previous methods. Finally, although the full V2P model performs best, removing the language model only degrades the WER by approximately 13 points, to 53.6%. By predicting phonemes directly, we also side-step the need to design phoneme-to-viseme mappings BID8. The inherent uncertainty is instead modelled directly in the predictive distribution. For instance, using edit distance alignments of the predictions to the ground-truths, we can determine which phonemes were most frequently erroneously included or missed, as shown in FIG4. Here we normalize the rates of deletions vs insertions; empirically, however, deletions were much more common than insertions. Among these errors the most common involve phonemes that are often occluded by the teeth (/d/, /n/, and /t/) as well as the most common English vowel /@/. Finally, by differentiating the likelihood of the phoneme sequence with respect to the inputs using guided backpropagation, we compute the saliency maps shown in the top row of FIG6 as a white overlay. The entropy at each timestep of the phoneme predictive distribution is shown as well. A full confusion matrix, absolute deletion/insertion/substitution counts, and additional saliency maps are shown in Appendices B, C, and E. To demonstrate the generalization power of our V2P approach, we also compare it to the results of the TM-seq2seq model of BID0 on LRS3-TED BID2. Unlike LSVSR, the LRS3-TED dataset includes faces at angles between ±90° instead of ±30°, and clips may be shorter than one second. Despite the fact that we do not train or fine-tune V2P on LRS3-TED, our approach still outperforms the state-of-the-art model trained on that dataset in terms of test set accuracy. In particular, we conducted two experiments. First, we evaluated performance on a subset of the LRS3-TED test set filtered according to the same protocol used to construct LSVSR, by removing instances with larger face angles and shorter clips (Filtered Test). Second, we tested on the full unfiltered test set (Full Test). In both cases, V2P outperforms TM-seq2seq, achieving WERs of 47.0 ± 1.6 and 55.1 ± 0.9 respectively. This shows that our approach is able to generalize well, achieving state-of-the-art performance on datasets, with different conditions, on which it was not trained. We presented a novel, large-scale visual speech recognition system.
Our system consists of a data processing pipeline used to construct a vast dataset-an order of magnitude greater than all previous approaches both in terms of vocabulary and the sheer number of example sequences. We described a scalable model for producing phoneme and word sequences from processed video clips that is capable of nearly halving the error rate of the previous state-of-the-art methods on this dataset, and achieving a new state-of-the-art in a dataset presented contemporaneously with this work. The combination of methods in this work represents a significant improvement in lipreading performance, a technology which can enhance automatic speech recognition systems, and which has enormous potential to improve the lives of speech impaired patients worldwide. A MEDICAL APPLICATIONS As a consequence of injury or disease and its associated treatment, millions of people worldwide have communication problems preventing them from generating sound. As hearing aids and cochlear transplants have transformed the lives of people with hearing loss, there is potential for lip reading technology to provide alternative communication strategies for people who have lost their voice. Aphonia is the inability to produce voiced sound. It may from injury, paralysis, removal or other disorders of the larynx. Common examples of primary aphonia include bilateral recurrent laryngeal nerve damage as a of thyroidectomy (removal of the thyroid gland and any tumour) for thyroid cancer, laryngectomy (surgical removal of the voice box) for laryngeal cancers, or tracheostomy (the creation of an alternate airway in the neck bypassing the voicebox). Dysphonia is difficulty in speaking due to a physical disorder of the mouth, tongue, throat, or vocal cords. Unlike aphonia, patients retain some ability to speak. For example, in Spasmodic dysphonia, a disorder in which the laryngeal muscles go into periods of spasm, patients experience breaks or interruptions in the voice, often every few sentences, which can make a person difficult to understand. We see this work having potential medical applications for patients with aphonia or dysphonia in at least two distinct settings. Firstly, an acute care setting (i.e. a hospital with an emergency room and an intensive care unit), patients frequently undergo elective (planned) or emergency (unplanned) procedures (e.g. Tracheostomy) which may in aphonia or dysphonia. In the U.S. 103,925 tracheostomies were performed in 2014, ing in an average hospital stay of 29 days . Similarly, in England and Wales 15,000 tracheostomies are performed each year.Where these procedures are unplanned, there is often no time or opportunity to psychologically prepare the patient for their loss of voice, or to teach the patient alternative communication strategies. Some conditions that necessitate tracheotomy, such as high spinal cord injuries, also affect limb function, further hampering alternative communication methods such as writing. Even where procedures are planned, such as for head and neck cancers, despite preparation of the patient through consultation with a speech and language therapist, many patients find their loss of voice highly frustrating especially in the immediate post-operative period. Secondly, where surgery has left these patients cancer-free, they may live for many years, even decades without the ability to speak effectively, in these patients we can envisage that they may use this technology in the community, after discharge from hospital. 
While some patients may either have tracheotomy reversed, or adapt to speaking via a voice prosthesis, electro-larynx or esophageal speech, many patients do not achieve functional spoken communication. Even in those who achieve good face-to-face spoken communication, few laryngectomy patients can communicate effectively on the telephone, and face the frequent frustration of being hung-up on by call centres and others who do not know them. Acute care applications. It is widely acknowledged that patients with communication disabilities, including speech impairment or aphonia can pose significant challenges in the clinical environment, especially in acute care settings, leading to potentially poorer quality of care BID42. While some patients will be aware prior to surgery that they may wake up unable to speak, for many patients in the acute setting (e.g. Cervical Spinal Cord Injury, sudden airway obstruction) who wake up following an unplanned tracheotomy, their sudden inability to communicate can be phenomenally distressing. Community applications. Patients who are discharged from hospital without the ability to speak, or with poor speech quality, face a multitude of challenges in day-to-day life which limits their independence, social functioning and ability to seek employment. We hypothesize that the application of technology capable of lip-reading individuals with the ability to move their facial muscles, but without the ability to speak audibly could significantly improve quality of life for these patients. Where the application of this technology improves the person's ability to communicate over the telephone, it would enhance not only their social interactions, but also their ability to work effectively in jobs that require speaking over the phone. Finally, in patients who are neither able to speak, nor to move their arms, this technology could represent a step-change in terms of the speed at which they can communicate, as compared to eye-tracking or facial muscle based approaches in use today. To compute the confusion matrix and the insertion/deletion chart shown in the main text in FIG4, we first compute the edit distance dynamic programming matrix between each predicted sequence of phonemes and the corresponding ground-truth. Then, a backtrace through this matrix gives an alignment of the two sequences, consisting of edit operations paired with positions in the prediction/ground-truth sequences. Counting the correct phonemes and the substitutions yields the confusion matrix Figure 7. The reader can note that the diagonal is strongly dominant. A few groups are commonly confused as expected due to their visual similarity, such as {/d/, /n/, /t/}, and to a lesser extent {/b/, /p/}.Counting insertions/deletions yields FIG4 in the main text, showing which phonemes are most commonly omitted (deleted), or less frequently, erroneously inserted. Using the same method for computing the edit distance as described in Appendix B, TAB3 shows the absolute numbers of insertions/deletions/substitutions for the Audio-Ph and the V2P model. As it can be seen, the percentage of insertions and substitutions is substantially higher when using the visual channel (V2P) compared to the audio channel (Audio-Ph). Different phonemes can be visually identical and the information to disambiguate them is missing from the visual channel, this may to a higher insertions and substitutions rate. 
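The alignment extraction described above can be sketched as a backtrace through the edit-distance table, yielding per-phoneme match/substitution/deletion/insertion operations that are then tallied into a confusion matrix. Tie-breaking between equal-cost paths is an arbitrary choice here and may differ from the paper's implementation.

```python
import numpy as np
from collections import Counter

def align(ref, hyp):
    """Backtrace through the edit-distance table to recover the alignment as a list of
    ('match'|'sub'|'del'|'ins', ref_token, hyp_token) operations."""
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(len(ref) + 1), np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + sub)
    ops, i, j = [], len(ref), len(hyp)
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i, j] == d[i - 1, j - 1] + (ref[i - 1] != hyp[j - 1]):
            ops.append(("match" if ref[i - 1] == hyp[j - 1] else "sub", ref[i - 1], hyp[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i, j] == d[i - 1, j] + 1:
            ops.append(("del", ref[i - 1], None))   # phoneme omitted by the model
            i -= 1
        else:
            ops.append(("ins", None, hyp[j - 1]))   # phoneme erroneously inserted
            j -= 1
    return ops[::-1]

def confusion_counts(pairs):
    """Count matches and substitutions per (reference, predicted) phoneme pair."""
    counts = Counter()
    for ref, hyp in pairs:
        for op, r, h in align(ref, hyp):
            if op in ("match", "sub"):
                counts[(r, h)] += 1
    return counts
```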
The models listed in TAB4 are optimized using a batch size of 128, batch normalization, and Adam BID28 with a learning rate of 10^−4 and default hyperparameters: first and second momentum coefficients 0.9 and 0.999 respectively, and ε = 10^−8 for numerical stability.
Layer | Filter size | Stride | Output channels | Input
conv1 | 3 × 3 × 3 | 1 × 2 × 2 | 16 | 9 × 128 × 128 × 1
pool1 | 1 × 2 × 2 | 1 × 2 × 2 | − | 7 × 63 × 63 × 16
conv2 | 3 × 3 × 3 | 1 × 1 × 1 | 32 | 7 × 31 × 31 × 16
pool2 | 1 × 2 × 2 | 1 × 2 × 2 | − | 5 × 29 × 29 × 32
conv3 | 3 × 3 × 3 | 1 × 1 × 1 | 64 | 5 × 14 × 14 × 32
pool3 | 1 × 2 × 2 | 1 × 2 × 2 | − | 3 × 12 × 12 × 64
conv4 | 3 × 3 × 3 | 1 × 1 × 1 | 128 | 3 × 6 × 6 × 64
pool4 | 1 × 2 × 2 | 1 × 2 × 2 | − | 1 × 4 × 4 × 128
fc5 | 1 × 1 × 1 | − | 256 | 512
fc6 | 1 × 1 × 1 | − | 64 | 256
Furthermore, to accelerate learning, a curriculum schedule limits the video duration, starting from 2 seconds and gradually increasing to a maximum length of 12 seconds over 200,000 training steps. Finally, image transformations are also applied to augment the image frames to help improve invariance to filming conditions. This is accomplished by first randomly mirroring the videos horizontally, followed by random changes to brightness, contrast, saturation, and hue. | This work presents a scalable solution to continuous visual speech recognition. | 1,285 | scitldr |
Previous work has found difficulty developing generative models based on variational autoencoders (VAEs) for text. To address the problem of the decoder ignoring information from the encoder (posterior collapse), these previous models weaken the capacity of the decoder to force the model to use information from latent variables. However, this strategy is not ideal as it degrades the quality of generated text and increases hyper-parameters. In this paper, we propose a new VAE for text utilizing a multimodal prior distribution, a modified encoder, and multi-task learning. We show our model can generate well-conditioned sentences without weakening the capacity of the decoder. Also, the multimodal prior distribution improves the interpretability of acquired representations. Research into generative models for text is an important field in natural language processing (NLP) and various models have been historically proposed. Although supervised learning with recurrent neural networks is the predominant way to construct generative language models BID22 BID28 BID26, auto-regressive word-by-word sequence generation is not good at capturing interpretable representations of text or controlling text generation with global features BID1. In order to generate sentences conditioned on probabilistic latent variables, BID1 proposed Variational Autoencoders (VAEs) BID11 for sentences. However, some serious problems that prevent training of the model have been reported. The problem that has been mainly discussed in previous papers is called "posterior collapse" BID25. Because decoders for textual VAEs are trained with "teacher forcing" BID27, they can be trained to some extent without relying on latent variables. As a , the KL term of the optimization function (Equation 1) converges to zero and encoder input is ignored BID1. Successful textual VAEs have solved this problem by handicapping the decoder so the model is forced to utilize latent variables BID1 BID30. However, we believe that weakening the capacity of the decoder may lower the quality of generated texts and requires careful hyper-parameter turning to find the proper capacity. Therefore, we take a different approach. We focus on two overlooked problems. First, previous research fails to address the problem inherent to the structure of VAEs. The fundamental cause of posterior collapse (apart from teacher forcing) is the existence of a suboptimal local minimum for the KL term. Second, although existing models use a LSTM as the encoder, it is known that this simple model is not sufficient for text generation tasks (; BID14 BID26 . In this work, we propose a new architecture for textual VAEs with two modifications to solve these problems. First, we use a multimodal prior distribution and an unimodal posterior distribution to eliminate the explicit minima of ignoring the encoder (Chapter 3.2). Multimodal prior distributions for VAEs have been proposed recently for image and video tasks BID7 BID3. Specifically, our model uses a Gaussian Mixture distribution as prior distribution which is trained with the method proposed by BID23.(a) The overall architecture of existing models.(b) The overall architecture of our model. In the encoder, hidden states of the self-attention Encoder and BoW are concatenated. The decoder estimates BoW of the input text from the latent variables as a sub-task in addition to generating text. In our model, the prior distribution of the latent variables is a Gaussian mixture model. Second, we modify the encoder (Chapter 3.3). 
We empirically compare a number of existing encoders and adopt a combination of two. The first is the recently proposed method of embedding text into fixed-size variables using the attention mechanism BID12. Although this method was originally proposed for classification tasks, we show this encoder is also effective at text generation tasks. The second is a Bag-of-Words encoding of the input text to help the encoder. It has been reported that a simple Bag-of-Words encoding is effective at embedding the semantic content of a sentence BID18. Our experiments show that the modified encoder produces improved results only when other parts of the model are modified as well to stabilize training. Additionally, our results imply that the self-attention encoder captures grammatical structure and Bag-of-Words captures semantic content. Finally, to help the model acquire meaningful latent variables without weakening the decoder, we add multi-task learning (Chapter 3.4). We find that a simple sub-task of predicting words included in the text significantly improves the quality of output text. It should be noted that this task does not cause posterior collapse as it does not require teacher forcing. With these modifications, our model outperforms baselines on BLEU score, showing that generated texts are well conditioned on information from the encoder (Chapter 4.3). Additionally, we show that each component of the multimodal prior distribution captures grammatical or contextual features and improves interpretability of the global features (Chapter 4.5). BID1 is the first work to apply VAEs to language modeling. They identify the problem of posterior collapse for textual VAEs and propose the usage of word dropout and KL annealing. BID16 models text as Bag-of-Words with VAEs. This is part of the motivation behind the usage of Bag-of-Words for textual VAEs. BID30 hypothesize that posterior collapse can be prevented by controlling the capacity of the decoder and propose a model with a dilated CNN decoder which allows changing the effective filter size. BID21 use a deconvolutional layer without teacher forcing to force the model into using information from the encoder. Our use of a multimodal prior distribution is inspired by previous works which try to modify the prior distributions of VAEs. BID7 and BID3 apply a VAE with a Gaussian Mixture prior distribution to video and clustering, respectively. BID23 propose the construction of a prior distribution from a mixture of posterior distributions of some trainable pseudo-inputs. Another recent proposal to restrict the latent variables is to use discrete latent variables BID20 BID25. Some discrete autoencoder models for text modeling have been proposed. While some show promise, discretization such as Gumbel-Softmax BID6 and Vector Quantization BID25 is required to train discrete autoencoders with gradient descent, as the gradient of a discrete hidden state cannot be calculated directly. A multimodal prior distribution can be regarded as a smoothed autoencoder model with discrete latent variables BID3 without a requirement for discretization. 3.1 VARIATIONAL AUTOENCODER FOR TEXT GENERATION 3.1.1 VARIATIONAL AUTOENCODER An RNN language model is trained to learn a probability distribution of the next word x t conditioned on all previous words x 1, x 2,..., x t−1 BID17.
A language model conditioned on a deterministic latent vector z (such as input text representation) has been proposed as well BID22: DISPLAYFORM0 Although these models can be regarded as a generative model with auto-regressive sampling, they cannot capture interpretable probabilistic structures of global features. BID1 propose a new language model which explicitly captures probabilistic latent variables of global features with Variational Autoencoders BID11.Variational Autoencoders (VAEs) are one way to construct a generative model based on neural networks, which learns Variational Bayes through gradient decent. A VAE has an encoder q φ (z|x) and a decoder p θ (x|z) each parameterized by a neural network. In many cases, a standard Gaussian distribution is used for the prior distribution of the latent vector p(z) and a Gaussian distribution is used for q φ (z|x). Instead of directly maximizing the intractable marginal probability p(x) = p(z)p θ (x|z)dz, we maximize the evidence lower bound: DISPLAYFORM1 = L ELBO As the model samples from q φ (z|x), the reparameterization trick BID11 can be used to train the model with gradient descent. Previous work on textual VAEs BID1 BID30 simply applied this model to sequence-to-sequence text generation models FIG0 ). Recent works BID1 BID30 have identified several obstacles for training VAEs for text generation. One of the largest problems, referred to as "posterior collapse" BID25, is that training textual VAEs often drives the second term of Equation 1 (KL term) close to zero BID1. When the KL term becomes zero, no information from the input text is reflected on latent variables since q φ (z|x) and p(z) are identical. This is an undesirable outcome since latent variables are expected to capture a meaningful representation of input to generate conditional output. However, to aid stabilization, the previous ground truth word is given to the decoder each time during training (teacher forcing BID27). As this technique is applied to textual VAEs as well, a simple language model based on LSTM can be trained without information from the decoder and cause posterior collapse. In order to solve this problem, previous methods try to weaken the decoder to force the model to use information from the encoder. However, weakening the capacity of the decoder is not an ideal strategy since it can lower the quality of generated text and requires additional hyper-parameters specifying decoder capacity. In this paper, we propose three modifications to the model and successfully improve upon textual VAEs without restricting the capacity of the decoder. These modifications are explained in the following chapters: 3.2, 3.3, and 3.4. In typical VAEs, a standard normal distribution N is used as the prior distribution p(z) and a normal distribution N (µ, σ 2) is used as the posterior distribution q φ (z|x). Although this model is also used for previous textual VAE models BID1 BID30, there is a trivial local minimum p(z) = q φ (z|x) which makes KL(q φ (z|x)|p(z)) in Equation 1 zero, manifesting in what is referred to as posterior collapse. Roughly speaking, we can avoid this if q φ (z|x) cannot be identical to p(z). One simple way to achieve this is to use a multimodal distribution as the prior distribution p(z) and an unimodal distribution as the posterior distribution q φ (z|x). This idea is motivated by recently proposed VAE models with a multimodal prior distribution for image and video generation BID7 BID3. 
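A compact sketch of the training objective follows: a single-sample estimate of the evidence lower bound with the reparameterization trick, written so the prior can be either the standard normal or the kind of trainable mixture-of-Gaussians prior proposed here (pseudo-inputs projected directly to mean and variance). Using a Monte-Carlo estimate of the KL term for the mixture prior is an assumption, since that KL has no closed form and the paper's exact estimator is not stated in this excerpt.

```python
import torch
import torch.nn as nn

class PseudoInputPrior(nn.Module):
    """Mixture-of-posteriors-style prior: K trainable pseudo-inputs are projected directly
    to Gaussian parameters (mu_k, log_var_k); p(z) is their uniform mixture."""
    def __init__(self, num_components, latent_dim):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(num_components, latent_dim))
        self.log_var = nn.Parameter(torch.zeros(num_components, latent_dim))

    def log_prob(self, z):                                    # z: (batch, latent_dim)
        diff = z.unsqueeze(1) - self.mu                       # (batch, K, dim)
        log_norm = torch.log(torch.tensor(2.0 * torch.pi))
        log_comp = -0.5 * (diff ** 2 / self.log_var.exp() + self.log_var + log_norm).sum(-1)
        return torch.logsumexp(log_comp, dim=1) - torch.log(torch.tensor(float(self.mu.size(0))))

def elbo(x, encoder, decoder, prior=None):
    """Single-sample ELBO (Equation 1): encoder(x) -> (mu, log_var) of the Gaussian posterior,
    decoder(x, z) -> log p(x|z) under teacher forcing. With prior=None the standard normal
    prior and its closed-form KL are used; with a mixture prior the KL term is estimated
    by Monte Carlo as log q(z|x) - log p(z)."""
    mu, log_var = encoder(x)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)          # reparameterization trick
    log_px_z = decoder(x, z)
    if prior is None:
        kl = 0.5 * (log_var.exp() + mu ** 2 - 1.0 - log_var).sum(-1)  # KL(q || N(0, I))
    else:
        log_norm = torch.log(torch.tensor(2.0 * torch.pi))
        log_qz_x = -0.5 * (log_var + (z - mu) ** 2 / log_var.exp() + log_norm).sum(-1)
        kl = log_qz_x - prior.log_prob(z)
    return (log_px_z - kl).mean()                                     # maximise this quantity
```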
We provide further explanation in Appendix A and discuss that modification for the decoder is not necessary if the problems in prior distribution is fixed. The problem with using a multimodal distribution as a prior for VAEs is deciding on what kind of distribution to use. Models which learn a multimodal prior distribution along with other parts of the VAE have been recently proposed BID7 BID3 BID23. One successful model uses a multimodal prior distribution of a variational mixture of posteriors prior (VampPrior) BID23. VampPrior VAEs have multiple trainable pseudo-inputs u k and regard the mixture of the posterior distributions of the pseudo-inputs DISPLAYFORM0 as the prior distribution (K is a pre-defined number of pseudo-inputs). Pseudoinputs are trained at the same time as the other components of the VAE. Although pseudo-inputs have the same size as the input image for the VAE in the original work BID23, we use pseudo-inputs which are projected onto µ and σ directly FIG1 ).In our experiments, we find a multimodal prior distribution performs unsupervised clustering and each component of multimodal prior distribution captures specific features of a sentence. Moreover, the components themselves also form clusters, creating a hierarchical structure within the representation space (Chapter 4.5). Existing models of textual VAEs use a simple LSTM as an encoder BID1 BID30. However, recent research into text generation has found that simple LSTMs do not have enough capacity to encode information from the whole text. Motivated by the of our experiments (Chapter 3.3), we propose concatenating the representation from the self-attention encoder and Bag-of-Words information. Ideally, self-attention encodes grammatical structure and Bag-of-Words encodes overall meaning. Our experiments imply our model is successful in this kind of division of roles (Chapter 4.4). The attention mechanism (; BID14) is a popular model to encode text with LSTMs. Since VAEs are models with fixed size probabilistic latent variables, this mechanism with variable size representation cannot be applied directly. Therefore, we use a recently proposed method called self-attention BID12 (Figure 3), an effective model to embed text into a fixed Figure 3: A self-attention encoder. This model encodes variable length input into a fixed length representation using an attention mechanism. The fixed length representation is acquired by summing up the hidden states of the bi-directional LSTM based on attention weights. Attention weights a s1,..., a sn are calculated by (a s1, . . ., a sn) = softmax(w s2 tanh(W 1 H T)).size vector representation for classification tasks using an attention mechanism. Our experiments show that embedded representations from self-attention are useful for text generation. The self-attention model uses hidden states of bi-directional LSTM h 1,..., h n with variable length. To acquire a fixed sized representation m s, hidden states are summarized with attention weights m s = n i=1 a si h i. Attention weights are calculated by using a weight matrix W 1 with shape d-by-2u (u is the size of a hidden state of bi-directional LSTM and d is a hyper-parameter) and a vector w s2 with size d: DISPLAYFORM0 Here H is a n-by-2u matrix of the hidden states H = (h 1, . . ., h n). To get richer information, r different weights (r is a hyper-parameter) are calculated with a r-by-d weight matrix W 2 = (w 12, . . ., w r2) in the model: DISPLAYFORM1 Here the softmax is performed along the second dimension. 
Finally, a fixed sized representation is acquired by M = AH. We simply flatten the matrix M into a representation vector. All parameters are trained with gradient descent. Previous research shows the effectiveness of Bag-of-Words in NLP tasks such as text classification BID5. Because the difficulty of encoding the content of the input sentence with LSTM is known, we propose using a simple Bag-of-Words input to encode the content of the sentence for text generation tasks. Also, since VAEs are trained in a stochastic manner, it is difficult to train the encoder. Since Bag-of-Words input is much easier to train compared to LSTMs and self-attention encoders, it will help stabilize training. We simply summarize word representation of all words in the input text and project this vector with a linear layer. In NLP deep learning tasks, some methods to improve the performance of the main task with multitask learning has been reported. For example, multi-lingual training even improves the of each language in translation task BID4 and sub-task of phone recognition improves the of speech recognition BID24. One of the effects of multi-task learning is said that it enables to acquire better intermediate representations BID13. Also, a recently proposed model to encode chemical structure with VAEs show that multi-task learning improves the quality of embedded representation BID19.To address the largest problem of VAEs for text, the difficulty in learning meaningful latent variables, we propose using multi-task learning in our model. However, using additional information such as grammatical properties or labels is not desirable for language modeling with textual VAEs. We find that the simple task of predicting words in output text can help the model improve the quality of output text. Additionally, this sub-task will alleviate the problem of posterior collapse since it does not contain auto-regressive structure which in turn requires training with teacher forcing. We compare our model with two models proposed by Bowman et al. FORMULA5 and BID30. Basically, we use the same configurations for these models. For the model of BID30, we use a SCMM-VAE model in the original paper and pretrain the encoder. For the multimodal prior distribution model, we report the score of a prior distribution with 500 components and analyze the acquired representation space with one with 100 components for ease of analysis. We use 100,000 sentences from a scale document dataset "Yahoo! Answers Comprehensive Questions and Answers version 1.0" for training to acquire the . For details of the dataset and model parameters, see Appendix B. We compare a self-attention encoder BID12, a LSTM encoder, and a Bag-of-Words encoder with tasks to embed a text into 128 sized vector and show the in Table 1. First, we compare the models on a sequence-to-sequence autoencoder model. We show that the self-attention encoder works best in terms of BLEU score. However, we find that the self-attention encoder has a higher false negative rate compared to even a simple LSTM at the task of predicting the words in an input text. From this , we hypothesize that the self-attention encoder is good at acquiring the structure of a sentence or focusing on specific information but is not good at embedding all the information in a sentence. From these , we decided to use self-attention and Bag-of-Words for our encoder. The for language modeling are shown in TAB2. 
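Before turning to the language-modeling results, here is a minimal sketch of the encoder just described: the structured self-attention pooling A = softmax(W_2 tanh(W_1 H^T)), M = AH (flattened), concatenated with a Bag-of-Words summary of the input. PyTorch is assumed, the names are ours, the single joint projection of the two representations is a simplification, and d = 350, r = 30 appear only as defaults.

```python
import torch
import torch.nn as nn

class SelfAttentiveBoWEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden=256, d=350, r=30, out_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.W1 = nn.Linear(2 * hidden, d, bias=False)   # maps 2u -> d
        self.W2 = nn.Linear(d, r, bias=False)            # maps d  -> r attention heads
        self.proj = nn.Linear(r * 2 * hidden + embed_dim, out_dim)

    def forward(self, tokens):
        emb = self.embed(tokens)                               # (B, n, e)
        H, _ = self.bilstm(emb)                                # (B, n, 2u)
        # A = softmax(W2 tanh(W1 H^T)); softmax over the n time steps.
        A = torch.softmax(self.W2(torch.tanh(self.W1(H))), dim=1)   # (B, n, r)
        M = torch.einsum("bnr,bnu->bru", A, H)                 # r weighted sums of hidden states
        attn_vec = M.reshape(M.size(0), -1)                    # flatten M into one vector
        bow_vec = emb.sum(dim=1)                               # Bag-of-Words: sum of embeddings
        return self.proj(torch.cat([attn_vec, bow_vec], dim=-1))
```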
We report the reconstruction loss (negative log likelihood) of text, KL divergence and BLEU of textual VAEs. The show that multi-task learning and a multimodal prior distribution in isolation both improve the model. On the other hand, changing the encoder in isolation has no influence on . Note that this is not the case for non-VAE models. However, when multi-task learning is also used, incorporating Bag-of-Words input (the first modification of the encoder) improves the score. Moreover, when we use a multimodal prior distribution, the self-attention encoder, the second modification of the encoder, outperforms the LSTM encoder. This implies that it is difficult to train the encoder (especially the self-attention encoder) of VAEs unlesss the overall model is improved as well. Therefore, when other parts of the model are improved in tandem and training becomes more stable, the improved ability of the encoder is utilized. Finally, our model with all modifications (the last line) outperforms baselines by a significant margin. Our model uses self-attention and Bag-of-Words as the encoder. We show the which imply that self-attention acquires grammatical structure and Bag-of-Words provides semantic content. is it possible to death penalty in the world to death penalty? Table 3: Sampling from the posterior distribution of our model when different input is given to the self-attention and Bag-of-Words encoders. "SA" is a sentence given to self-attention encoder and "BoW" is a sentence given to the Bag-of-Words encoder. For details, see Chapter 4.4. For more samples, see Table 7 in Appendix D.First, to see the relationship between these two encoders, we analyze generated sentences when different sentences are provided to self-attention and Bag-of-Words encoder. We show examples of the in Table 3. Generated sentences in Table 3 have similar grammatical structure to the input of the self-attention encoder and nouns in the sentences are strongly affected by the Bag-of-Words encoder. Moreover, by looking into the attention weights of the self-attention encoder, we can see which parts of a sentence the encoder focuses on as shown by BID12. We show the maximum attention weight for each word in FIG2. We can see that the self-attention encoder assigns a larger weight to words which determine the structure of a sentence such as interrogatives and prepositions rather than nouns. In addition, attention weights are similar between sentences which share grammatical structure even when nouns or word lengths differ. We show our model properly acquires a representation of sentences and a multimodal prior distribution helps us interpret acquired representation with unsupervised clustering. By sampling from each component, we can see that our model successfully performs clustering. We find that sentences allocated to to components respectively have one of at least two things in common: grammatical structure or topic. For sentences sampled from components, please see Table 8 in Appendix D. Table 4.We show a new method to interpret the global structure of the acquired representation space. We analyze the representation space by visualizing the means of 100 components in the multimodal prior distribution of our model with t-SNE BID15 and show the in FIG3. In addition to the fact that each component clusters together, we now see the clusters themselves form into larger clusters, creating a hierarchical relationship. We take a further look into two clear clusters indicated in FIG3. 
First, we sample from component 38, 56, and 94 in cluster 1 and show the in Table 4. From the sampled sentences, we can see that components in cluster 1 share grammatical structure "[interrogative] can I [verb]" and each component has its own topics (computer, politics, culture). On the other hand, components in cluster 2 share the topics (politics or human relationship) and each component has its own grammatical structure. Also, from FIG3, components 52, 31, and 37 seem to be on the circle in this order and we can see the continuous changes of grammatical structure in this order. Thus, we can observe that our model acquires a hierarchical structure of sentences and the structure can be easily interpreted through analysis of components in the multimodal prior distribution. As models with multimodal distributions are relatively new, we hope methods to control multimodal prior distribution are investigated further in future works. However, we emphasize that our is already impressive since without a multiomodal prior, extensive search with sampling or additional labels is required to interpret the structure of acquired text representation. A multimodal prior distribution makes it much easier to understand the structure of the representation space though analysis of components of the distribution. how do u get a group of african american to join? how do i increase a mortgage loan in indiana? how do you get married in canada? Table 4: Samples from components of the prior distribution from cluster 1 (above) and 2 (below) in FIG3. Components in cluster 1 share grammatical structure and components in cluster 2 share topics. Please see Chapter 4.5 for more details. and increases hyper-parameters. We show (i) multimodal prior distribution, (ii) improvement of the encoder and (iii) multi-task learning can improve the model with a simple LSTM decoder. We show theoretical justification for a multimodal prior distribution as a solution for posterior collapse. We use the equivalent objective for ELBO (Equation 1) by BID31: DISPLAYFORM0 where DISPLAYFORM1, which does not depend on any parameters. This objective can be minimized to zero without utilizing latent variables under the assumption that (i) the decoder is sufficiently flexible and (ii) the posterior distribution can be trained so p(z) = q φ (z|x) BID31. This nature of ELBO causes posterior collapse in VAEs. There are two simple ways to break these assumptions. First, if the capacity of the decoder is restricted, assumption (i) cannot be satisfied. This is the theoretical underpinning for previous approaches used in textual VAEs BID1 BID30 which restrict the capacity of the decoder. However, as previously discussed, weakening the decoder is undesirable. Additionally, hyper-parameter search is required to strike a balance between the two terms if the KL term is not modified as well. Therefore, we propose to break the assumption (ii) with a multimodal prior distribution. When the prior distribution p(z) is a multimodal distribution and the posterior distribution q φ (z|x) is an unimodal distribution, there is no way to satisfy p(z) = q φ (z|x). Moreover, there will be multiple minima for KL(q φ (z|x)|p(z)). 
When Kullback-Leibler divergence KL(q|p) between the Gaussian mixture distribution p and the normal distribution q is minimized (here we assume that q is trainable), q will be allocated to one component of p since KL(q|p) = q(z) log q(z) p(z) dz becomes larger when p is assigned a low probability in an area where q is assigned a high probability FIG4 ). In such a formulation, there is no clear global minima for the KL term and the posterior distribution is not forced to ignore information from the encoder. We propose a hypothesis that the modification of the decoder is not necessary if multimodal prior distribution is used. In practice, it is natural to assume that training the decoder so p D (x) = p θ (x|z) for all z is much harder than to make KL(q φ (z|x)|p(z)) = 0. Under the assumption, the model will be trained so KL(q φ (z|x)|p(z)) = 0 as the first step and this condition force the decoder to be trained so p D (x) = p θ (x|z) for all z when there is no modification of the model. Although this is the opposite way from the explanation by BID31, this process is more natural in practice since it is easy to train prior distribution. Therefore, if we modify the model to avoid KL(q φ (z|x)|p(z)) = 0, the decoder will not try to satisfy p D (x) = p θ (x|z) for all z but learn the conditioned distribution for each z. This analysis motivates us to modify textual VAE without weakening the capacity of the decoder. The of our experiments are consistent with this hypothesis. B. As test and validation dataset, we use 10,000 sentences each. We set the maximum length of a sentence to 60 words (ignore the rest of the sentence, the average length of the original sentences is 38.12 words) and use the most common 40,000 words for this experiment. Our model uses self-attention and Bag-of-Words in the encoder and a LSTM for the decoder. The size of the hidden state of LSTM is 256 for both for LSTM and self-attention. The size of the word embedding is 256 and the size of the latent variables is 128. For the self-attention encoder, we use d = 350 and r = 30. In accordance with BID1 BID30, we feed the latent variables on every step of the decoder LSTM by concatenating it with the word embedding. We applied 0.4 word dropout for input text to the decoder for our model and the model from BID1. In this paper, we modify the model without restricting the capacity of the decoder. However, the method used by BID1 called word dropout, which was originally proposed to weaken the decoder, is now seen as a method of smoothing BID29. As this method is also effective and harmless for non-VAE text generation task, we use word dropout for our model. In addition, we pretrain the encoder and the decoder with sequence-to-sequence text generation for our multi-prior distribution model. Note that it was impossible to pretain decoders for previous models since it can in posterior collapse. For multi-prior distribution, we compare 4 numbers of components and found that performance is not sensitive to this hyperparameter, although a larger number of components in a slightly better score TAB5. As using a prior distribution with many components leads to overfitting, over-regularization, and high computational complexity BID23, we report the score of a prior distribution with 500 components and analyze the acquired representation space with 100 components for ease of analysis. We compare our model with two models proposed by BID1 and BID30. Basically, we use the same configurations for these models. 
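Returning to the Appendix A argument above, the intuition that a unimodal posterior locks onto one component of a multimodal prior can be made concrete with a Monte-Carlo estimate of KL(q_phi(z|x) || p(z)) when p(z) is a VampPrior-style mixture. The sketch below is illustrative only: the mixture components are represented directly by trainable (mu_k, logvar_k) pairs standing in for projected pseudo-inputs, and all names are ours.

```python
import math
import torch

def log_normal(z, mu, logvar):
    # log N(z; mu, diag(exp(logvar))), summed over the latent dimensions.
    return (-0.5 * (math.log(2 * math.pi) + logvar
                    + (z - mu) ** 2 / logvar.exp())).sum(-1)

def kl_q_vs_mixture(mu, logvar, prior_mu, prior_logvar, n_samples=8):
    """Monte-Carlo KL( N(mu, sigma^2) || (1/K) sum_k N(prior_mu_k, prior_sigma_k^2) ).

    mu, logvar:             (B, D) posterior parameters of q(z|x)
    prior_mu, prior_logvar: (K, D) component parameters (projected pseudo-inputs)
    """
    eps = torch.randn(n_samples, *mu.shape, device=mu.device)
    z = mu + torch.exp(0.5 * logvar) * eps                        # (S, B, D)
    log_q = log_normal(z, mu, logvar)                             # (S, B)
    # log p(z) = logsumexp_k log N(z; mu_k, sigma_k^2) - log K
    comp = log_normal(z.unsqueeze(2), prior_mu, prior_logvar)     # (S, B, K)
    log_p = torch.logsumexp(comp, dim=-1) - math.log(prior_mu.size(0))
    return (log_q - log_p).mean(dim=0)                            # (B,) per-example KL estimate
```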
For the model of BID30, we use the SCMM-VAE model in the original paper and pretrain the encoder. We use Adam BID10 for the optimizer. According to our experiments, setting the learning rate to 5 × 10 −4 and β 1 to 0.5 performs the best. For KL weight annealing, we set the initial weight for the KL term to be 0 and increase it linearly to 1 until epoch 30. After KL weight annealing, we train for 80 epochs with learning rate decay (0.95 for every epoch). Table 6: Semi-supervised learning. LM-LSTM and SA-LSTM come from BID2, they denotes the LSTM initialized with an autoencoder and a language model. The methods of semisupervised learning with VAEs use the same scheme as BID30. LSTM is a simple supervised model. The structure of semi-supervised models using VAEs is taken from BID30. We use the topic of a sentence from the dataset as a label and feed the encoded representation from the encoder to the discriminator. We report the of semi-supervised learning in Table 6. Our models do not differ from semi-supervised learning baselines. This can be understood because this semi-supervised learning assumes that label information is helpful or necessary to generate proper sentences. Our experiments show that our model both is conditioned by the encoder and also generates proper sentences without labels. This is consistent with the reasoning from BID30 that the best models for language modeling and semi-supervised learning are different. We show additional samples for Table 3 in Table 7. Please see what happens to the death penalty for a day? SA is it true that australia likes war to update their new improve weapons? BoW definition of traditional education the death penalty and violence in a community is it true that a good alternative to get a degree of education?is it possible to death penalty in the world to death penalty? is it true that the age of people to change lots of traditional?is there a place to get a peaceful death penalty in the world? is it a good idea of having a 100 % of education?are there any place in the city to make a death penalty? Table 7: Sampling from posterior distribution of our model when different texts are input to selfattention and Bag-of-Words of the encoder. "SA" is a sentence given to self-attention encoder and "BoW" is a sentence to Bag-of-Words encoder. For detail, see Chapter 4.4. 1 is it true that the only way to provide the holy rabbit through the same answer? is it possible to go to a police officer? is it possible to have to pay for her home do n't you think the president is the worst president of the us is the liberal, the jewish religion will be cut the food to do? who is the president of the united states? Table 8: Samples from components of prior distribution. Component 1, 22, and 76 generate sentences with common structure. On the other hand, component 60, 68, and 83 generate structurally diverse sentences on the same topics (computer, sports). <UNK>is a word not in the dictionary. For detail, see Chapter 4.5. We report text from 6 components of a multimodal prior distribution from our model in Table 8. We found two types of features allocated for components. The first one is grammatical structure. Components 1, 22, and 77 in Table 8 each generate similarly structured sentences: sentences from component 1 begin with "it is true that" or "it is possible to", sentences from component 22 begin with "does anyone (anybody)", and sentences from component 77 begin with "what is the best way to". 
This is straightforward to interpret as properly acquiring grammatical structure will lower reconstruction loss. More interestingly, sentences generated the next type of components, namely components 60, 68, and 83 are each on the same topic. Sentences generated from component 60 are about sports, those from component 68 are about computer (music), and those from component 83 are about politics. However, these sentences do not share grammatical structure and generate sentences with diverse structures. | We propose a model of variational autoencoders for text modeling without weakening the decoder, which improves the quality of text generation and interpretability of acquired representations. | 1,286 | scitldr |
Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy. Over the past few years, deep neural networks have achieved state-of-the-art performance on several challenging tasks in the domains of computer vision, speech recognition, and natural language processing. Driven by increasing amounts of data and computational power, deep learning models have become bigger and deeper to better learn from data. While these models are typically deployed in a datacenter back-end, preserving user privacy and reducing user-perceived query times mandate the migration of the intelligence offered by these deep neural networks towards edge computing devices. Deploying large, accurate deep learning models to resource-constrained computing environments such as mobile phones, smart cameras etc. for on-device inference poses a few key challenges. Firstly, state-of-the-art deep learning models routinely have millions of parameters requiring~MBs of storage, whereas on-device memory is limited. Furthermore, it is not uncommon for even a single model inference to invoke~billions of memory accesses and arithmetic operations, all of which consume power and dissipate heat which may drain the limited battery capacity and/or test the device's thermal limits. Confronting these challenges, a growing body of work has emerged that intends to discover methods for compressing neural network models while limiting any potential loss in model quality. Latencysensitive workloads relying on energy-efficient on-device neural network inference are often memory bandwidth-bound, and model compression offers the two-fold benefit of reducing the total number of energy-intensive memory accesses as well as improving the inference time due to an effectively higher memory bandwidth for fetching compressed model parameters. Within the realm of model compression techniques, pruning away (forcing to zero) the less salient connections (parameters) in the neural network has been shown to reduce the number of nonzero parameters in the model with little to no loss in the final model quality. 
Model pruning enables trading off a small degradation in model quality for a reduction in model size, potentially reaping improvements in inference time and energy-efficiency. The ing pruned model typically has sparse connection matrices, so efficient inference using these sparse models requires purpose-built hardware capable of loading sparse matrices and/or performing sparse matrix-vector operations BID30 BID23. Also, representing sparse matrices carries with it an additional storage overhead increasing the model's net memory footprint which must also be taken into consideration. In this work, we perform a closer examination of the effectiveness of model pruning as a means for model compression. From the perspective of on-device neural network inference, given a bound on the model's memory footprint, how can we arrive at the most accurate model? We aim to answer this question by comparing the quality of the models obtained through two distinct methods: training a large model, but pruned to obtain a sparse model with a small number of nonzero parameters (large-sparse); and training a small-dense model with size comparable to the large-sparse model. Both of these methods expose a model accuracy and size tradeoff, but differ remarkably in terms of their implications on the design of the underlying hardware architecture. For this comparative study, we pick models across a diverse set of application domains: InceptionV3 BID26 and MobileNets BID13 for image recognitions tasks, stacked LSTMs for language modeling, and seq2seq models used in Google's Neural Machine Translation BID28 system. In the process of this investigation, we also develop a simple gradual pruning approach that requires minimal tuning and can be seamlessly incorporated within the training process and demonstrate its applicability and performance on an assortment of neural network architectures. Early works in the 1990s BID18 BID12 performed pruning using a second-order Taylor approximation of the increase in the loss function of the network when a weight is set to zero. In Optimal Brain Damage BID18, the saliency for each weight was computed using a diagonal Hessian approximation, and the low-saliency weights were pruned from the network and the network was retrained. In Optimal Brain Surgeon BID12, the saliency for each weight was computed using the inverse Hessian matrix, and the low-saliency weights were pruned and all other weights in the network were updated using the Hessian matrix. More recently, magnitude-based weight pruning methods have become popular techniques for network pruning BID10 a; BID25 BID22. Magnitude-based weight pruning techniques are computationally efficient, scaling to large networks and datasets. Our automated gradual pruning algorithm prunes the smallest magnitude weights to achieve a preset level of network sparsity. In contrast with the works listed above, our paper focuses on comparing the model accuracy and size tradeoff of large-sparse versus small-dense models. A work similar to ours is the work by BID22 on pruning a RNN and GRU model for speech recognition and showing that a sparse RNN that was pruned outperformed a dense RNN trained normally of comparable size. While they provide one data point comparing the performance of a sparse vs dense model, our work does an extensive comparison of sparse vs dense models across a wide range of models in different domains (vision and NLP). Narang et al. 
also introduce a gradual pruning scheme, based on pruning, in each layer, all weights below a manually chosen threshold that grows linearly with one slope in phase 1 and with another slope in phase 2, followed by normal training. Compared to their approach, we do not have two phases and do not have to choose two slopes, and we do not need to choose weight thresholds for each layer (we rely on a sparsity schedule which determines the weight thresholds). Thus, our technique is simpler, does not require much hyperparameter tuning, and is shown to perform well across different models. Within the context of reducing model size by removing redundant connections, several recent works BID2 BID16 BID3 propose techniques to prune and induce sparsity in a structured way, motivated primarily by the desire to speed up computations on existing hardware architectures optimized for dense linear algebra. Such techniques perform coarse-grain pruning and depend critically on the structure of the convolutional layers, and may not be directly extensible to other neural network architectures that lack such structural properties (LSTMs, for instance). On the contrary, our method does not make any assumptions about the structure of the network or its constituent layers and is therefore more generally applicable. While pruning focuses on reducing the number of non-zero parameters, in principle, model pruning can be used in conjunction with other techniques to further reduce model size. Quantization techniques aim to reduce the number of bits required to represent each parameter from 32-bit floats to 8 bits or fewer. Different quantization techniques such as fixed-point quantization BID27 or vector quantization BID8 achieve different compression ratios and accuracies but also require different software or hardware to support inference at runtime. Pruning can be combined with quantization to achieve maximal compression BID9. In addition, an emerging area of research is low-precision networks where the parameters and/or activations are quantized to 4 bits or fewer BID20 BID14 BID24 BID31. Besides quantization, other potentially complementary approaches to reducing model size include low-rank matrix factorization BID5 BID15 BID17 and group sparsity regularization to arrive at an optimal layer size BID1. We extend the TensorFlow BID0 framework to prune the network's connections during training. For every layer chosen to be pruned, we add a binary mask variable which is of the same size and shape as the layer's weight tensor and determines which of the weights participate in the forward execution of the graph. We inject ops into the TensorFlow training graph to sort the weights in that layer by their absolute values and mask to zero the smallest magnitude weights until some desired sparsity level s is reached. The back-propagated gradients flow through the binary masks, and the weights that were masked in the forward execution do not get updated in the backpropagation step. We introduce a new automated gradual pruning algorithm in which the sparsity is increased from an initial sparsity value s_i (usually 0) to a final sparsity value s_f over a span of n pruning steps, starting at training step t_0 and with pruning frequency Δt: s_t = s_f + (s_i − s_f)(1 − (t − t_0)/(nΔt))^3, for t ∈ {t_0, t_0 + Δt, ..., t_0 + nΔt}. The binary weight masks are updated every Δt steps as the network is trained, to gradually increase the sparsity of the network while allowing the training steps to recover from any pruning-induced loss in accuracy.
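A minimal, framework-agnostic sketch of this schedule and the magnitude-based mask update is given below. The cubic form of s_t matches the equation just stated; the NumPy code and function names are ours rather than the TensorFlow ops used in the paper.

```python
import numpy as np

def target_sparsity(t, s_i, s_f, t0, n, delta_t):
    """Sparsity s_t at training step t, for t in [t0, t0 + n*delta_t]."""
    progress = np.clip((t - t0) / (n * delta_t), 0.0, 1.0)
    return s_f + (s_i - s_f) * (1.0 - progress) ** 3   # prunes fast early, then tapers off

def update_mask(weights, sparsity):
    """Binary mask that zeroes out (approximately) the smallest-magnitude fraction of weights."""
    k = int(round(sparsity * weights.size))
    if k == 0:
        return np.ones_like(weights)
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Every delta_t steps during training:
#   mask = update_mask(W, target_sparsity(t, 0.0, 0.875, t0, n, delta_t))
# The forward pass then uses W * mask, and masked weights receive no update.
```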
In our experience, varying the pruning frequency ∆t between 100 and 1000 training steps had a negligible impact on the final model quality. Once the model achieves the target sparsity s f, the weight masks are no longer updated. The intuition behind this sparsity function in equation is to prune the network rapidly in the initial phase when the redundant connections are abundant and gradually reduce the number of weights being pruned each time as there are fewer and fewer weights remaining in the network, as illustrated in FIG0. In the experimental presented in this paper, pruning is initiated after the model has been trained for a few epochs or from a pre-trained model. This determines the value for the hyperparameter t 0. A suitable choice for n is largely dependent on the learning rate schedule. Stochastic gradient descent (and its many variants) typically decay the learning rate during training, and we have observed that pruning in the presence of an exceedingly small learning rate makes it difficult for the subsequent training steps to recover Figure 2a shows the learning rate and the pruning schedule used for training sparse-InceptionV3 BID26 models. All the convolutional layers in this model are pruned using the same sparsity function, and pruning occurs in the regime where the learning rate is still reasonably high to allow the network to heal from the pruning-induced damage. Figure 2b offers more insight into how this pruning scheme interacts with the training procedure. For the 87.5% sparse model, with the gradual increase in sparsity, there comes a point when the model suffers a near-catastrophic degradation, but recovers nearly just as quickly with continued training. This behavior is more pronounced in the models trained to have higher sparsity. TAB0 compares the performance of sparse-InceptionV3 models pruned to varying extents. As expected, there is a gradual degradation in the model quality as the sparsity increases. However, a 50% sparse model performs just as well as the baseline (0% sparsity), and there is only a 2% decrease in top-5 classification accuracy for the 87.5% sparse model which offers an 8x reduction in number of nonzero (NNZ) model parameters. Also note that since the weights are initialized randomly, the sparsity in the weight tensors does not exhibit any specific structure. Furthermore, the pruning method described here does not depend on any specific property of the network or the constituent layers, and can be extended directly to a wide-range of neural network architectures.4 COMPARING large-sparse AND small-dense MODELS MobileNets are a class of efficient convolutional neural networks designed specifically for mobile vision applications BID13. Instead of using standard convolutions, MobileNets are based on a form of factorized convolutions called depthwise separable convolution. Depthwise separable convolutions consist of a depthwise convolution followed by a 1x1 convolution called a pointwise convolution. This factorization significantly reduces the number of parameters in the model by filtering and combining input channels in two separate steps instead of together as in the standard convolution. The MobileNet architecture consists of one standard convolution layer acting on the input image, a stack of depthwise separable convolutions, and finally averaging pooling and fully connected layers. 
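To make the parameter savings of the depthwise separable factorization concrete, the short sketch below (PyTorch assumed, names ours) builds a standard 3x3 convolution and its depthwise-plus-pointwise counterpart and compares their parameter counts.

```python
import torch.nn as nn

def param_count(module):
    return sum(p.numel() for p in module.parameters())

def standard_block(c_in, c_out):
    return nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)

def depthwise_separable_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),  # depthwise filtering
        nn.Conv2d(c_in, c_out, kernel_size=1),                          # 1x1 pointwise combining
    )

c_in, c_out = 128, 256
print(param_count(standard_block(c_in, c_out)))             # ~ 9 * c_in * c_out weights
print(param_count(depthwise_separable_block(c_in, c_out)))  # ~ 9 * c_in + c_in * c_out weights
```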
For the dense baseline model with width multiplier 1.0, there are a total of 4.21M parameters, 99% of which are in the 1x1 pointwise convolution layers (74.6%) and fully connected layers (24.3%). We do not prune the parameters in the one standard convolution layer and in the depthwise convolution layers since there are very few parameters in those layers (1.1%).The width multiplier is a parameter of the MobileNet network that allows trading off the accuracy of the model with the number of parameters and computational cost. The width multiplier of the baseline model is 1.0. For a given width multiplier α ∈, the number of input channels and the number of output channels in each layer is scaled by α relative to the baseline 1.0 model. We compare the performance of dense MobileNets trained with width multipliers 0.75, 0.5, and 0.25 with the performance of sparse MobileNets pruned from dense 1.0 MobileNet in Figure 3 and TAB1 on the ImageNet dataset. We see that for a given number of non-zero parameters, sparse MobileNets are able to outperform dense MobileNets. For example, the 75% sparse model (which has 1.09 million parameters and a top-1 accuracy of 67.7%) outperforms the dense 0.5 MobileNet (which has 1.32 million parameters and a top-1 accuracy of 63.7%) by 4% in top-1 accuracy while being smaller. Similarly, the 90% sparse model (which has 0.46 million parameters and a top-1 accuracy of 61.8%) outperforms the dense 0.25 MobileNet (which has 0.46 million parameters and a top-1 accuracy of 50.6%) by 10.2% in top-1 accuracy while having the same number of non-zero parameters. Overall, pruning is a promising approach for model compression even for an architecture that was designed to be compact and efficient by using depthwise separable convolutions instead of standard convolutions as a factorization-like technique to reduce the number of parameters. The sparsity parameter is shown to be an effective way to trade off the accuracy of a model with its memory usage and compares favorably with the width multiplier in MobileNet. Training a sparse MobileNet using our gradual pruning algorithm is also easy. For pruning a dense MobileNet, we used the same learning rate schedule as for training a dense MobileNet but with an initial learning rate 10 times smaller than for training a dense MobileNet, and all other hyperparameters were kept the same. We train an LSTM language model on the Penn Tree Bank dataset using the models and training procedure described in BID29. At each time step, the LSTM language model outputs the probability of the next word in the sentence given the history of previous words. The loss func- tion is the average negative log probability of the target words, and the perplexity is the exponential of the loss function. The language model is composed of an embedding layer, 2 LSTM layers, and a softmax layer. The vocabulary size is 10,000, and the LSTM hidden layer size is 200 for the small model, 650 for the medium model, and 1,500 for the large model. In the case of the large model, there are 15M parameters in the embedding layer, 18M parameters in each of the two LSTM layers, and 15M parameters in the softmax layer for a total of 66M parameters. Different hyperparameters are used to train the different-sized models. When pruning a model of a certain size, we use the same hyperparameters that were used for training the dense model of that size. 
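The parameter breakdown quoted above for the large model can be reproduced with a quick back-of-the-envelope check. The sketch below assumes a standard LSTM parameterization (four gates, input size equal to the hidden size), which is an assumption on our part, but it recovers the stated 15M/18M/15M/66M figures.

```python
vocab, hidden = 10_000, 1_500

embedding = vocab * hidden                               # ~15M parameters
lstm_layer = 4 * ((hidden + hidden) * hidden + hidden)   # ~18M parameters per LSTM layer
softmax = hidden * vocab + vocab                         # ~15M parameters

total = embedding + 2 * lstm_layer + softmax
print(f"embedding {embedding/1e6:.1f}M, lstm {lstm_layer/1e6:.1f}M x2, "
      f"softmax {softmax/1e6:.1f}M, total {total/1e6:.1f}M")   # total ~66M
```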
We compare the performance of the dense models with sparse models pruned from medium and large to 80%, 85%, 90%, 95%, and 97.5% sparsity in Figure 4 and TAB2. In this case, we see that sparse models are able to outperform dense models which have significantly more parameters (note the log scale for the number of parameters). The 90% sparse large model (which has 6.6 million parameters and a perplexity of 80.24) is able to outperform the dense medium model (which has 19.8 million parameters and a perplexity of 83.37), a model which has 3 times more parameters. Compared with MobileNet, pruning PTB model likely gives better because the PTB model is larger with significantly more parameters. Our show that pruning works very well not only on the dense LSTM weights and dense softmax layer but also the dense embedding matrix. This suggests that during the optimization procedure the neural network can find a good sparse embedding for the words in the vocabulary that works well together with the sparse connectivity structure of the LSTM weights and softmax layer. From Figure 4 and TAB2, we also see that the 85% sparse medium model (which has 3 million parameters and a perplexity of 85.17) outperforms the 95% sparse large model (which has 3.3 million parameters and a perplexity of 87.83). The accuracy of the 95% sparse large model is comparable to the accuracy of the 90% sparse medium model (which has 2 million parameters and a perplexity of 87.86). Together, these suggest that there is an optimal compression range when pruning. In the case of PTB, pruning to 95% sparsity for a compression ratio of 20x significantly degrades the performance of the sparse model compared to pruning to 90% sparsity for a compression ratio of 10x, as seen in Figure 4 from the curve of perplexity vs. number of parameters traced by either of the sparse models. These suggest that in order to get the best-performing sparse model of a certain size, we should train a dense model that is 5x-10x larger and then prune to the desired number of parameters rather than taking the largest and best-performing dense model and pruning this model by 20x or more to the desired number of parameters, assuming that the difference in performance of the two dense baseline models is not that large. We note that it may be possible to obtain slightly better for pruning to 95% sparsity or higher with more hyperparameter tuning, and the we obtained for pruning a model of a certain size were from using exactly the same hyperparameter configuration as for training the dense model of that size. The Google Neural Machine Translation (NMT) architecture is a seq2seq model with attention BID28. We use the open-source TensorFlow implementation available at BID21. The model is based on an encoder-decoder architecture. The encoder has an embedding layer which maps the source vocabulary of 36,548 words into a k-dimensional space, 1 bidirectional LSTM layer, and 3 standard LSTM layers. The decoder has an embedding layer which maps the target vocabulary of 36,548 words into a k-dimensional space, 4 LSTM layers with attention, and finally a softmax layer. For the dense baseline model with number of units k = 1024, there are 37.4M parameters in each of the encoder embedding, decoder embedding, and softmax layers and 98.6M parameters in all of the LSTM layers for a total of 211M parameters. We apply pruning to all of the LSTM layers, embedding layers, and softmax layers, but we do not prune the attention parameters of which there are relatively few. 
The other dense models were obtained by varying the number of units k. We use the WMT16 German and English dataset with news-test2013 as the dev set and news-test2015 as the test set. The BLEU score is reported as a measure of the translation quality. The learning rate schedule used for training the dense models is 170K iterations with initial learning rate 1.0 and 170K iterations with learning rate decay of 0.5 every 17K iterations. For pruning a dense model, the learning rate schedule we use is 70K iterations with initial learning rate 0.5 and 170K iterations with learning rate decay of 0.5 every 17K iterations, and all other hyperparameters were kept the same. Since we noticed that the NMT training procedure had high variance, we tested several pruning schemes applied to NMT. Our standard implementation of gradual pruning increases the sparsity of every layer to the same sparsity level at each pruning step. We tested a variant which we call "layerwise constant" sparsity: instead of simultaneously increasing the sparsity of all layers to some sparsity level at each pruning step, we subdivide the pruning interval and increase the sparsity of one layer at a time to that sparsity level. This potentially has the effect of reducing the impact of pruning and allowing the network to recover better with training. Finally, we compared with "global" pruning: we prune the smallest magnitude weights across the entire network, regardless of which layer they are in. Global pruning produces a different sparsity level for each layer and was shown to perform well on NMT in the work of BID25. Overall, the layerwise constant pruning scheme performed best on average, so we report the with the layerwise constant pruning scheme in FIG2 and TAB3. We note that there is high variance in the due to the stochasticity of the training process, as illustrated by the error bar in FIG2 which is the standard deviation of the BLEU score of 10 randomly initialized and independently trained NMT models. The in TAB3 show that for 80% sparsity (5x compression), the pruned model actually achieves a slightly higher BLEU score than the baseline model (though we note the error bar). For 85% sparsity, the BLEU score drops by around 0.25, and for 90% sparsity, the BLEU score drops by around 0.6. When we compare the performance of dense and sparse models in FIG2 and TAB3, we again see that sparse models outperform even larger-sized dense models. The BLEU score of the dense model falls off quickly after 2x reduction in model size while the BLEU score of the sparse model starts to fall off only after 5x reduction in NNZ parameters. The net memory footprint of a sparse model includes the storage for the nonzero parameters and any auxiliary data structures needed for indexing these elements. Pruning models helps reduce the number of nonzero-valued connections in the network; however the overhead in sparse matrix storage inevitably diminishes the achievable compression ratio. The bit-mask sparse matrix representation requires 1 bit per matrix element indicating whether the element is nonzero, and a vector containing all the nonzero matrix elements. This representation incurs a constant overhead regardless of the model sparsity. In the compressed sparse row (column) storage (CSR(C)) adopted in BID23, each nonzero parameter in the sparse matrix is associated with a count (usually stored as a 4 or 5 bit integer) of the number of zeros preceding it. The overhead in this case is proportional to the NNZ in the model. 
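The overhead trade-off between the two representations can be sketched numerically. The snippet below estimates the footprint of a dense matrix, the bit-mask representation, and a CSR(C)-like representation, using 32-bit values, a 1-bit mask, and a 5-bit zero-run count per nonzero as described above; the function and parameter names are ours and the accounting is approximate (row pointers and similar bookkeeping are ignored).

```python
def footprint_bits(n_params, sparsity, value_bits=32, index_bits=5):
    """Approximate storage cost (in bits) of one weight matrix at a given sparsity."""
    nnz = int(n_params * (1.0 - sparsity))
    dense = n_params * value_bits
    bitmask = n_params * 1 + nnz * value_bits       # 1 bit per element + nonzero values
    csr_like = nnz * (value_bits + index_bits)      # each nonzero carries a zero-run count
    return dense, bitmask, csr_like

for sparsity in (0.5, 0.75, 0.9):
    dense, bitmask, csr_like = footprint_bits(4_210_000, sparsity)
    print(f"sparsity {sparsity:.0%}: dense {dense/8e6:.1f}MB, "
          f"bitmask {bitmask/8e6:.1f}MB, CSR-like {csr_like/8e6:.1f}MB")
# Consistent with the text: the bit-mask is slightly cheaper at moderate sparsity,
# while the CSR-like format wins at high sparsity.
```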
TAB5 compares these two representations for sparse-MobileNets. The CSR(C) representation can enable higher compression ratio for networks with high sparsity. Note, however, that the bit-mask representation offers marginally lower overhead at smaller sparsity levels. In spite of this overhead, large-sparse models appear to achieve higher accuracy than small-dense models with comparable memory footprint. For instance, MobileNet with width multiplier 1 and sparsity 50% has similar footprint as MobileNet with width multiplier 0.75, but obtains higher accuracy. TAB6 further highlights the trade-off between model size and accuracy for dense and sparse models. The performance gap between large-sparse and small-dense models widens for larger models such as as the PTB language models and NMT (see TAB2 and TAB3). It is worth noting that the presented in this work were obtained by training neural networks using 32-bit floating point representation. For neural networks trained to perform inference using reduced precision (8-bit integer, for instance) arithmetic, the memory overhead of sparse matrix storage represents a bigger fraction of the total memory footprint. Quantization of the parameters to a reduced precision number representation is also an effective method for model compression, and the interplay between model quantization and pruning and their collective impact on model accuracy merits a closer examination. We defer that investigation to a future extension to this work. This work sheds light on the model size and accuracy trade-off encountered in pruned deep neural networks. We demonstrate that large-sparse models outperform comparably-sized small-dense models across a diverse set of neural network architectures. We also present a gradual pruning technique that can be applied with ease across these different architectures. We believe these will encourage the adoption of model pruning as a tool for compressing neural networks for deployment in resource-constrained environments. At the same time, we hold the opinion that our will provide further impetus to the hardware architecture community to customize the next generation of deep learning accelerator architectures to efficiently handle sparse matrix storage and computations. | We demonstrate that large, but pruned models (large-sparse) outperform their smaller, but dense (small-dense) counterparts with identical memory footprint. | 1,287 | scitldr |
Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper. The Transformer architecture has enabled large-scale language models (LMs) trained on a huge amount of data (; b; b) to greatly improve the state-of-the-art on natural language processing tasks. These models are used to extract contextualized word embeddings for transfer learning purposes and as natural language generators. The latter can leverage large amounts of unannotated data and a simple log-likelihood training objective. However, once such models are trained, controlling attributes of Table 1: The PPLM employs a pre-trained language model (LM) without any changes to the model parameters and can generate text with controlled attributes such as topic and sentiment. We demonstrate control with two tiny and easy to construct attribute models: a bag of words (BoW) related to a topic and a linear discriminator trained on top of LM latent representations to control sentiment. The underlined prefix is what the LM is conditioned on to generate a passage of text (e.g. The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato The potato). The controlled attributes are colored and bracketed (e.g. [Science] ), and words in the BoW that are directly optimized for are highlighted brightly (e.g. research). The softer highlights correspond to words related to the attribute, but not directly optimized for during the control process (e.g. health). [-] The potato The The potato chip recipe you asked for! We love making these, and I've been doing so for years. I've always had a hard time keeping a recipe secret. I think it's the way our kids love to eat them -so many little ones. [Science] The potato The To conclude, the most significant and lasting damage from the economic crisis in 2008 was that many governments, including those in the political center, lost power for the first time in modern history. generated text becomes difficult without modifying the model architecture to allow for extra input attributes or fine-tuning with attribute-specific data . 
Controllable generation entails modeling p(x|a), where a is some desired controllable attribute(s) and x the generated sample. However, generative models only learn p(x). In computer vision, Plug & Play Generative Networks (PPGN) from developed a mechanism for generating images with different attributes by plugging a discriminator (attribute model) p(a|x) together with a base generative model p(x) and sampling from the ing p(x|a) ∝ p(a|x)p(x), effectively creating a conditional generative model on the fly from any supplied attribute model. In a similar manner, we propose the Plug and Play Language Model (PPLM) for conditional language generation that combines one or more simple attribute models p(a|x)-either in the form of a bagof-words (BoW) or single layer classifiers-with a pre-trained, unconditional language model p(x). We sample from the ing combined model by following gradients in the latent representation space in a manner inspired by the approximate Metropolis-adjusted Langevin (MALA) sampler deployed in. Optimization is performed ex post facto in the activation space, therefore no re-training or finetuning is needed. Control is fine-grained, with a strength parameter determining how strong the attribute influence should be; a strength of 0 fully recovers the original model p(x). This design allows vast flexibility: users can combine a state-of-the-art generative model, which may be large and difficult to train, with any number of attribute controllers. Attribute models may be easier to train or untrained (in the case of BoW models), and multiple controllers may be combined flexibly during inference. In this paper, we demonstrate the PPLM approach using a GPT-2 345M model as the general-purpose LM p(x), but the method applies in any representation space from any transformer-based text generator and allows combination with any attribute model p(a|x). We demonstrate controlled generation with a number of attribute controllers, assembled and combined during generation, each with a different strength, acting as a set of "control knobs" that tune generation towards the desired attribute (see examples in Table 1). Code for the experiments is available at: https://github.com/uber-research/PPLM. Our key contributions are: • We introduce the Plug and Play LM for controlled language generation, discuss its relation to existing work, and how sampling from a PPLM works (Sections 2 and 3). • We demonstrate controlling of text generation on a range of attributes, including 7 topics each defined using a bag of words, and 1 simple discriminator on sentiments. We quantify effectiveness using both automated evaluation (separately trained perplexity and sentiment models) as well as human evaluation (for attribute relevance and fluency). All evaluations point toward the ability of PPLMs to generate attribute controlled, fluent text (Section 4). • We compare PPLM with strong LM baselines such as CTRL and GPT-2 finetuned for positivty . Our method, without any LM training, is on par and often outperforms the baselines on attribute relevance and fluency (Section 4.2, and Section 4.3). • We show that the PPLM approach can be used to detoxify certain instances where generation of toxic content is likely by following the negative gradient of a model trained to detect toxicity (Section 4.4). We also show how PPLM can be used for structurally constrained story writing (Section 4.5). 
Controlled generation Current methods for controlled text generation involve either fine-tuning existing models with Reinforcement Learning (RL) , training Generative Adversarial Networks , or training conditional generative models . Different from our approach, these methodologies are not plug and play, since the entire model needs to be separately fine-tuned for each specific attribute. train a large language model with over 50 different control codes. The are high quality because they train exactly to maximize p(x|a), but this comes at the expense of fixing control codes up front and of training a very large model (1.6B parameters). Our method does not require retraining any conditional generative model, and both the language model and the conditional model can be flexibly assembled. Table 2 gives a comparison of recent approaches to language modeling tuned for specific attributes. In another interesting but tangential piece of work, recently showed that a pre-trained language model can be steered to recover arbitrary sentences. Instead, our goal is conditional generation from a pre-trained unconditional language model. , and more recently;; , leveraged the Shannon Noisy Channel Theory for improving sequence-to-sequence modeling. Their approach translates a source language sentence y into a target language sentence x by first sampling from a forward model proposal distribution p forward (x|y) and then reranking samples based on probabilities given by p backward (x|y) ∝ p(x)p(y|x). PPLM scores samples using the same basic equation, but as we have no forward or proposal model p forward (x|a), we rely on the latent space updates proposed by. As a baseline, we consider using p(x) as a "forward model" and then reranking, which we will see works moderately well in some scenarios and poorly in others (see Tables 4 and 6).; consider controlled language generation -the former with discriminators, and the latter with a bag of words -where the decoding procedure is modified to consider the scoring function used for decoding. note that control with weighted decoding (WD) is difficult and often leads to sacrificing fluency and coherence. strongly relies on sampling from a set of keywords on a specific topic and it does not allow to bias generation towards a topic in a manner that does not necessary include a set of keywords. proposed a decoding strategy for generating interesting responses in dialogue systems, using bags of words and word embeddings. Sophisticated sampling methods can be used to constrain the model generation to certain keywords and topics. We evaluate WD as a baseline. Text Style Transfer Outside of language modeling, the field of text style transfer performs a related task.; train variational auto-encoders for style transfer that rely on learning disentangled latent representations for style and content. demonstrate the efficacy of a simple approach based on replacing attribute related n-grams with n-grams corresponding to the desired attribute based on a conditional generative model. A key difference between the above and our approach is that we use an offline discriminator and perform optimization based on this discriminator, which as suggested by may outperform adversarial training approaches. More recently, adapt an approach from unsupervised language translation to style transfer, where a denoised auto-encoder is trained with an objective consisting of a weighted combination of a re-construction loss and a back-translation loss. 
While the above approaches have shown impressive success on style transfer tasks, the main focus is not controlled language generation, and further, the methods are not plug and play. Given a sequence of tokens X = {x 0, · · ·, x n}, LMs are trained to compute the unconditional probability of the sequence p(X). This probability can be rewritten in terms of product of conditional probabilities by recursively applying the chain-rule as: In this paper, we use a transformer to model the distribution of natural language. To present our approach clearly, we first briefly summarize the transformer using recurrent notation. Let us define the history matrix H t to consist of the key-value pairs from the past i.e H t = [(K t) corresponds to the key-value pairs from the i-th layer generated at all time-steps from 0 to t. Efficient implementations of the transformer use the cached H t to generate x t+1, given x t. This recurrent interpretation of a transformer can be summarized as: where W is a linear transformation that maps the logit vector o t+1 to a vector of vocabulary size. This allows for efficient language generation without repeated forward passes corresponding to the prior conditioning text x 0,..., x t−1. In order to control the output of the language model, at every generation step t, we shift the history H t in the direction of the sum of two gradients: one toward higher log-likelihood (LL) of the attribute a under the conditional attribute model p(a|x) and one toward higher LL of the unmodified language model p(x). Combining these factors with a variable multiplier provides us with a controllable "knob" to guide generation in a given direction with a specified strength. Step 1 Step 2 Step 3 Figure 1: Simplified illustration of the proposed approach in three phases. In Step 1, a forward pass is performed through the language model to compute the likelihood of a desired attribute using an attribute model that predicts p(a|x). In Step 2, a backward pass updates the internal latent representations of the LM, using gradients from the attribute model, to increase the likelihood of the passage having the desired attribute. In Step 3, a new distribution over the vocabulary (p t+1) is generated from the updated latents (H t) and the current token x t. The next token is then sampled from the updated distribution. This process of updating the latents is repeated at each time-step, leading to a gradual transition towards the desired attribute. For computational efficiency, one may choose to modify only the latents within some window of the recent past, depicted as the dotted-red region. (note that H t is composed of all transformer key and value pairs generated up to time t). Taking steps in H t space leads to gradual changes to model activations -which may be thought of as gradual reinterpretations of the past -that guide future generation in the desired direction. Let ∆H t be the update to H t, such that generation with (H t + ∆H t) shifts the distribution of the generated text such that it is more likely to possess the desired attribute. ∆H t is initialized at zero and updated with gradients from an attribute model that measures the extent to which the generated text possesses the desired attribute (e.g. positivity). We rewrite the attribute model p(a|x) as p(a|H t + ∆H t) and then make gradient based updates to ∆H t as follows: where α is the step size, γ is the scaling coefficient for the normalization term. 1 This update step can be repeated m times; in practice we use 3 to 10. 
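A minimal PyTorch-style sketch of this latent update is given below. It assumes a cached-transformer interface lm(x_t, past) -> (logits, past) and an attribute_loss callable that returns a scalar such as -log p(a|·) computed from the perturbed forward pass; these names, and the default values of the step size, number of steps, and normalization exponent, are illustrative assumptions rather than the released implementation.

```python
# Sketch of the gradient-based update of the perturbation Delta-H_t described
# above: repeat a small number of normalized gradient steps that increase the
# attribute likelihood under the perturbed key-value history.
import torch

def perturb_past(lm, x_t, past, attribute_loss, num_steps=3,
                 step_size=0.02, gamma=1.0):
    # one zero-initialized perturbation tensor per cached key/value tensor
    delta = [torch.zeros_like(p, requires_grad=True) for p in past]
    for _ in range(num_steps):
        perturbed = [p + d for p, d in zip(past, delta)]
        logits, _ = lm(x_t, perturbed)        # forward pass with H_t + Delta-H_t
        loss = attribute_loss(logits)         # e.g. -log p(a | H_t + Delta-H_t)
        grads = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            for d, g in zip(delta, grads):
                # normalized gradient step toward higher attribute likelihood
                d -= step_size * g / (g.norm() ** gamma + 1e-10)
        delta = [d.detach().requires_grad_(True) for d in delta]
    return [p + d.detach() for p, d in zip(past, delta)]
```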
Subsequently, a forward pass through the LM with the updated key-value pairs is performed to obtain the updated logits o t+1 as o t+1, H t+1 = LM(x t, H t), where H t = H t + ∆H t. The perturbed o t+1 is then used to generate a new distribution p t+1 as in Equation 3. The approach described in the previous section is able to generate text tuned for a particular discriminator, but left unchecked it will quickly in unrealistic adversarial or fooling examples as the text moves into low probability regions. To combat this, we use the unconditional language model in two ways that ensure the fluency is maintained at or near the level of the unconditional language model (here GPT-2). Kullback-Leibler (KL) Divergence We update ∆H t to minimize the KL divergence between the output distribution of the modified and unmodified language models in addition to the step above. In practice, this is accomplished by adding the quantities together before taking a gradient, though it can be visualized as two separate steps as in Figure 2. We scale the KL coefficient by a scalar λ KL, and in practice, setting this hyperparameter to 0.01 works well in general across tasks. In addition to minimizing KL divergence, which affects the past via ∆H t, we perform post-norm fusion similarly to. This does not Figure 2: An oversimplified, Markov chain view into why steps that maximize both log p(a|x) and log p(x) are needed. The sentence under consideration is shown as a black dot, which is first pushed in the direction of maximizing log p(a|x) and then in the direction of maximizing log p(x). In practice we use a single step and simply add the log probabilities; we take steps in continuous space of hidden representations H rather than in the discrete x (byte pair) space, and rather than resampling the entire sentence each step, we take one step in H space per byte-pair sample. directly affect ∆H t; rather, it just serves to constantly tie the generated text to the unconditional p(x) LM distribution. We accomplish this by sampling from, where p t+1 and p t+1 are the unmodified and modified output distributions, respectively, and β is a normalizing factor such that it forms a valid distribution. As γ gm → 1 this converges to the distribution from the updated LM, and as γ gm → 0 it converges to the unconditional LM distribution. We find that in practice values for γ gm in the range 0.8 − 0.95 work well. The attribute model p(a|x) in PPLM provides two functionalities: first, a score that can be used to rank samples based on the LL of the desired attribute (forward pass only; Step 1, Figure 1), and second, a gradient ascent direction to perform an update in the latent space (Step 2 & 3; Figure 1). The former can be used to generate r samples and rank them to choose the best one. This can serve as an additional method for attribute control in addition to sampling with updated latents. Further, to avoid the problem of repetitive, low quality text , we compute the mean over the Dist-1, Dist-2 and Dist-3 scores (for the generated passage), which is an indicator of repetitiveness , and then discard samples with a mean score below a threshold τ. In this section we describe our evaluation methodology and then show controlled generation under various attribute models. We also show use cases of PPLM in language detoxification and in controlled story telling. For all reported in this section, we use top-k sampling with k = 10 to draw from the softmax distribution over the vocabulary. 
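Before turning to evaluation, the sketch below (NumPy) illustrates the two mechanisms just described: the post-norm geometric-mean style fusion of the modified and unmodified output distributions, and the mean Dist-1/2/3 filter used to discard repetitive samples. The function names, the default γ_gm value, and the filter threshold are illustrative assumptions; only the qualitative behavior (γ_gm → 1 recovers the updated LM, γ_gm → 0 recovers the unconditional LM) is taken from the text above.

```python
# Sketch of post-norm fusion and Dist-based sample filtering.
import numpy as np

def fuse(p_mod, p_unmod, gm_scale=0.9):
    # fused distribution ∝ p_mod**γ_gm * p_unmod**(1 - γ_gm), then renormalized
    fused = (p_mod ** gm_scale) * (p_unmod ** (1.0 - gm_scale))
    return fused / fused.sum()

def dist_n(tokens, n):
    # number of distinct n-grams, normalized by the length of the text
    ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return len(ngrams) / max(len(tokens), 1)

def keep_sample(tokens, threshold=0.25):
    # discard repetitive samples whose mean Dist-1/2/3 falls below a threshold
    mean_dist = np.mean([dist_n(tokens, n) for n in (1, 2, 3)])
    return mean_dist >= threshold
```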
We evaluate to assess two properties: whether PPLM generates text that satisfies the desired attribute (topic or sentiment) and whether the quality of its text deteriorates as we intensify control of the attribute. Note we can always turn the control knob down to zero to disable control of attributes and reach the fluency of the original model. If desired, a user can tune the knobs at inference until a chosen tradeoff between attribute strength and fluency is reached. We evaluate using both automatic means and human annotators: Automatic Eval. Perplexity is an automated measure of fluency, though its effectiveness has been questioned in open-domain text generation . We measure perplexity using a different pre-trained language model, GPT (b). The diversity of text in the passages is measured using the number of distinct n-grams (normalized by the length of text) as in. We report Dist-1, Dist-2, and Dist-3 scores for the distinct 1-2-3-grams (measured across all samples generated for a given attribute control task, e.g. a specific topic for topic control). Such scores are an indicator of the diversity of the samples generated . We aslo use external sentiment classifiers for sentiment evaluation. Human Eval. We consider two types of human annotation: fluency and A/B testing on attribute relevance. Annotators are asked to evaluate the fluency of each individual sample on a scale of 1-5, with 1 being "not fluent at all" and 5 being "very fluent," as done in. In the A/B testing for attribute relevance, we consider all combinatorial pairs of all four variants: B, BR, BC, and BCR (6 combinations). We then ask annotators to rank the pair on the desired attribute (e.g. topic relevance, sentiment strength), while allowing "neither" and "both" options to account for equally good/bad generations . We obtain annotations from nine external occupational annotators. Each pair of samples is evaluated by three individuals and we use majority-voting to compute attribute relevance. For fluency we use average of the three annotations. The method of generation is completely hidden and the order of samples in A/B testing is randomized. Ablation study and baselines. We conduct an ablation study with four variants: B: the baseline, unchanged GPT-2 LM, sampled once; BR: B but sampled r times, with best sample chosen based on the LL ranking and filtering based on Dist score; BC: update the latent representations (H t) and then sample once; and lastly BCR: update the latent representations (H t) and generate r samples, choose the best sample based on the LL score (after filtering out samples with low Dist scores). As baseline approaches we consider CTRL: , a recent language model; GPT2-FT-RL: a GPT-2 LM fine-tuned for human evaluated positivity with RL ; and WD: a weighted decoding baseline in which the B model's outputs are weighted directly toward maximizing p(a|x) ; see Section S6 for details. Hyperparameters used for each experiment are given in Section S10 The simplest attribute model we use gives the log of the sum of likelihoods of each word in some predefined Bag of Words (BoW). Given a set of keywords {w 1, · · ·, w k} that specify a topic of interest and the output distribution of the language model p t+1, the log likelihood is: We construct BoWs that represent seven distinct topics: SCIENCE, MILITARY, LEGAL, COMPUT-ERS, SPACE, POLITICS, and RELIGION (see Section S16 for complete word lists). Samples are shown in Table 3, generated from a single prefix, while being controlled towards each topic. 
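A minimal sketch of this bag-of-words log-likelihood, i.e. the log of the total probability mass that p_{t+1} places on the keywords in the bag, is shown below. Here bow_token_ids is an assumed list of vocabulary indices for the keywords (single-token keywords for simplicity); keeping the computation in terms of the softmax over the logits keeps it differentiable, so its gradient can drive the latent updates described earlier.

```python
# Sketch of the BoW attribute loss: log of the summed keyword probabilities.
import torch

def bow_log_likelihood(logits, bow_token_ids):
    # logits: 1-D tensor of unnormalized next-token scores over the vocabulary
    probs = torch.softmax(logits, dim=-1)
    # total probability mass on the bag-of-words tokens (small epsilon for log)
    return torch.log(probs[bow_token_ids].sum() + 1e-12)

# During the latent update one would minimize the negative of this quantity,
# e.g. loss = -bow_log_likelihood(logits, bow_token_ids).
```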
Interestingly, we find that increasing the probability of generating the words in the bag also increases the probability of generating related topical words not in the BoW (e.g. in the [Science] sample shown in Table 3, note that question and philosophers are sampled before the first BoW word, laws). Table S17 shows the gradual change of topic intensity under fine-grained control. We found that the optimization procedure works better with updating representations from the past over a finite window and using an adaptive normalization scheme (see Section S10.3). For automatic and human evaluation, we generate 420 samples evenly distributed among seven BoW attribute models and 20 prefixes (see the full list in Section S14), for each of the four variants described in the ablation study. See Section S7 for further details on evaluation and . Table 4 show that human annotators find text from BCR (51.7%) and BC (46.9%) to be significantly more on topic than B (15.8%) and BR (11.1%). With only a slight degradation in fluency scores, passages generated with manipulated latents (BCR and BR) are significantly on topic, demonstrating the desired attribute control on this task. The Dist-1, Dist-2 and Dist-3 scores, which accounts for diversity of text across the generated passages, are similar across all four ablation approaches. Further, BCR slightly outperforms CTRL (51.7% & 50.0%), and significantly outperforms WD (36 %). It is also interesting that BC itself outperforms WD (36 %). BCR, CTRL and WD all score similarly on the fluency metric. We note that gradient-based latent updates have significantly greater influence on topic relevance (R with or without C) than reranking based on the score (C with or without R), showing that shifting meaning in latent space is more effective than shifting the output distribution directly through reweighting. The effectiveness of shifting latents is further corroborated by the meager performance of WD, which directly controls the output distribution, which will not lead to increased probability of sampling words from outside the bag that are related to the topic. Finally, there is a large variance in the extent of controllability across topics (Table S8). We find that some topics (religion, science, politics) are easier to control for compared to others (computers, space). Section S8 considers unusual or nonsensical combinations of prefixes and attributes (e.g. prefix 'potato' and topic 'religion'), and we find that even for these settings PPLM is able to successfully control for the desired attribute, often with hilarious twists! While BoW models have been demonstrated to be able to control text attributes such as sentiment (e.g., rely on extracting a set of attribute-based phrases to control the sentiment during style transfer), being able to control attributes using more sophisticated discriminators is desirable when it is difficult to express the attribute with a simple bag of words. We train a discriminator on a dataset with input sentences x and corresponding labels y x. For an input x of length t, we compute o x:t and train f on the mean (ō t) of the embeddings across time. All discriminators in this work consist of a single layer classifier that predicts the target label fromō x t. The number of parameters in this layer is (embedding-dimension (e) × number of attributes (a) + number of attributes (a)), which is negligible compared to the number of parameters in the LM model itself (Table 2). 
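A sketch of such a single-layer discriminator is given below; embed_dim and num_classes correspond to the embedding dimension e and the number of attributes a mentioned above, and the module operates on the mean of the LM's output representations across time. The class and variable names are illustrative, not taken from the released code.

```python
# Single-layer attribute discriminator over the time-averaged LM representation.
import torch
import torch.nn as nn

class AttributeDiscriminator(nn.Module):
    def __init__(self, embed_dim, num_classes):
        super().__init__()
        # parameter count: embed_dim * num_classes + num_classes
        self.linear = nn.Linear(embed_dim, num_classes)

    def forward(self, hidden_states):           # hidden_states: [seq_len, embed_dim]
        mean_rep = hidden_states.mean(dim=0)    # average over time (o-bar_t)
        return torch.log_softmax(self.linear(mean_rep), dim=-1)

# log p(a|x) for the desired class can be read off the returned log-probabilities
# and used as the attribute loss, exactly like the BoW loss above.
```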
Although the loss is a function of the entire sequence, here we adopt a greedy approach, similar to; , in which we optimize for a higher-probability of the sequence having a specific attribute by considering changes only to the next token to be generated. This objective can be described as follows, where f is the discriminator: Note that o t+2 is a function of x t+1. Further, x t+1 ∼ Softmax(Wõ t+1), which depends on ∆H t. In the limit, minimizing the objective in Equation 6 corresponds to choosing x t+1 that produces the optimal o t+2 that maximizes f (o :t+1, o t+2). However, this limits the diversity of the generated text and could potentially lead to language degeneration . Alternatively, we focus on a softer optimization approach where we aim to shift the distributionp t+1 = Softmax(Wõ t+1) towards one that in expectation has a higher likelihood of having the desired attribute a. Possible approaches to accomplishing this are using REINFORCE and the Gumbel-Softmax trick . However, both of these would slow down convergence. Instead, as in Dai Table 4: For each treatment in the ablation study, we report mean±std-dev across (human and automated) fluency metrics. The topic (%) reports the fraction of samples matching the target topic, as evaluated by human annotators. Table S8 provides per-topic . Approaches BC and BCR demonstrate significant control over the topic of the generated text, while retaining similar diversity (Dist-1, Dist-2, Dist-3) scores and minimal degradation in Perplexity and Fluency evaluations vs the baseline LM (B). The gain from ranking and choosing from multiple samples BR over B is limited (4.7%). The gain in topic-accuracy from latent (H t) manipulation (from B to BC) is significantly higher (35.8%). Perplexity is computed using the GPT LM (a), which differs from the LM generating text (GPT-2). For CTRL and WD, since human evaluation is performed in comparison with BCR via A/B testing, we report the numbers for BCR as well from these comparisons, for the human evaluated metrics. Further, we consider one sample per prefix for CTRL, ing in fewer samples and higher Dist-1, 2, 3 scores as a consequence. PPLM outperforms CTRL and WD on topic-relevance, while being comparable on fluency scores. The sentiment discriminator here distinguishes sentiment between POSITIVE and NEGATIVE and is trained on the SST-5 dataset . Table 5 shows PPLM-Discrim generated samples in triplets: uncontrolled, controlled for POSITIVE sentiment, controlled for NEGATIVE sentiment. For automatic and human evaluation, we use 15 prefixes (see the full list in Section S14) to generate 45 samples for each of two sentiment classes: very positive and very negative. Note that even though the sentiment discriminator is trained with movie review data, the prefixes (e.g. "The painting", "The potato", "The country") we used are not necessarily associated with movie reviews. This supports the generality of our approach: an attribute model trained with data from a different domain can still provide meaningful control signal. Table 6 shows evaluation . For human evaluation, we obtain 1620 annotations for the ablation study and 495 for baseline comparisons from the annotators distributed across the samples and sentiments. Unlike the topic control setting, sampling and ranking in a considerable increase in attribute accuracy (19.3% → 41.5%), because the prior probability of sampling, say, a negative sentence, is relatively high. 
BC in a decrease in fluency when compared to B, while being significantly more consistent with the desired attribute (19.3% → 39.6%). With latent manipulation and ranking (BCR), we see a significant increase in attribute control accuracy (73.7%) while retaining fluency similar to B and BR. Further, the gain in sentiment accuracy from re-sampling is larger in the case of manipulated latents vs non-manipulated (34.1% increase from BC to BCR > 22.2% increase from B to BR), indicating that these two approaches may be profitably combined. We also evaluate attribute control with an external sentiment classifier trained on IMDB movie reviews , which is a different dataset from the one used to train the attribute model , and the same rough story holds, albeit with smaller gaps between approaches. We compare to baselines CTRL, GPT2-FT-RL, and WD. BCR performs comparably to CTRL (73.7% and 80.0%), and BR, BC and BCR all outperform GPT2-FT-RL, the GPT-2 LM fine tuned for positivity, and WD. Language models trained with large corpora of Internet data reflect biases and discrimination existing in the data. A recent paper by conducted adversarial attacks that make The country The country's top prison system is forcing prisoners to use a trash dump, rather than a toilet, to flush their waste out, as the authorities fear the waste is more toxic and could cause cancer, an official at a major prison has revealed.... Table 6: Evaluation of models/ variants on the sentiment control task, with mean±std-dev reported across fluency metrics. Sentiment accuracy reports the fraction of samples with an accurate target sentiment. Approach BCR provides significant control over sentiment while showing minimal degradation in fluency. See Table S9 for full on individual sentiments. *GPT2-FT-RL is only evaluated for the positivity half of the task, as it is fine-tuned only for positivity . For human evaluation metrics, we compare the baselines CTRL, GPT2-FT-RL and WD with BCR and perform A/B style testing. We include both numbers for comparison. GPT-2 produce racist output when given a carefully optimized trigger string as prefix. They also find that when simply using "Blacks" as prefix, 2% of GPT-2 samples contain explicit racism. Other prefixes (e.g., "Asians" or "Jews") are mentioned but no percentage is reported. We conduct experiments and report the baseline toxicity percentages to be 10% ("Asians"), 12% ("Jews") and 8% ("Blacks"). With adversarial triggers generated from the released codebase by the average toxicity percentage is 63.6%. Further details can be found in Section S12. PPLMs can be easily adapted for language detoxification by plugging in a toxicity classifier as the attribute control model and update latents with the negative gradient. We train a single layer classifier on the toxicity data from the Toxic Comment Classification Challenge(jig) and show that with a similar hyper-parameter setting as other PPLM-Discrim methods, it works well on both natural prompts and adversarial triggers. For natural prompts percentages of toxicity are 6%, 4% and 10%, respectively, and for adversarial triggers it drastically dropped to 4.6% on average, with statistical significance. Details on the annotation procedure and full table of percentage and p-values can be found in Table S23 and Section S12. Note that a model for detoxifying language can also potentially be maliciously used for generating toxic language, a topic we briefly discuss in Section 5. 
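As a sketch of how little changes for detoxification, the snippet below reuses the discriminator-style attribute model from above but flips the objective: minimizing log p(toxic|x), rather than maximizing an attribute likelihood, follows the negative gradient of the toxicity classifier. Treating index 1 as the toxic class, and reusing the perturb_past sketch from earlier, are illustrative assumptions.

```python
# Detoxification variant: same latent-update machinery, opposite sign.
def detox_loss(hidden_states, toxicity_clf):
    # toxicity_clf: an AttributeDiscriminator-style module returning
    # log-probabilities over {non-toxic, toxic}; index 1 = toxic (assumed).
    log_probs = toxicity_clf(hidden_states)
    # Minimizing log p(toxic | x) steers the latents away from toxic text;
    # plugging this loss into perturb_past() above is the only change needed.
    return log_probs[1]
```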
We explore controlled generation for assistive story writing (; ; ;). Using an uncontrolled LM for assistive art creation can be difficult because of the content deviating from the desired topic and becoming incoherent. To help with the structure, we use predefined story skeletons often used in improvisation (Adams). We fill in the blank between these prefixes with a PPLM. See examples in Table S20 and Table S21. We present PPLM, a plug and play method for controlled language generation that allows flexible assembling of a large, pre-trained language model and a BoW or a small, easy-to-train discriminator, and achieves fine-grained control of attributes such as topics and sentiment. Without retraining or fine-tuning the language model, the simple mechanism shows great capability of attribute control while retaining fluency. We believe this method could serve as a simple baseline for the largely open-ended language generation tasks where controlling is challenging. There has recently been a substantial discussion around the ethics of capable language models , both in their potential to recapitulate problematic social biases and for them to be directly abused for societal harm (e.g. to generate disinformation). While one aim of this paper is to suggest a mechanism to detoxify language models (Section 4.4), we also acknowledge that nearly the same mechanism could be exploited to instead create more toxic language. Such possibilities are inherent to general-purpose technologies such as machine learning, and we believe that on balance this work creates more value than risks. Acknowledgements The authors gratefully thank Bryan McCann for providing samples for the CTRL baseline, Joel Lehman for discussion regarding the ethical implications for this work, Jiale Zhi for help with the computational framework, Colan Chen for creating associated artwork for the blog, Avishek Joey Bose for helpful discussions, Julien Chaumond, Lysandre Debut, Thomas Wolf, and the Hugging Face team for co-producing the PPLM demo and helping integrate the code into their transformers repository, all the annotators at Uber, HKUST and Caltech for their labeling, and members of the Deep Collective research group at Uber AI for helpful discussion, ideas, and feedback on experiments. Without retraining or fine-tuning the language model, the simple mechanism shows great capability of attribute control while retaining fluency. We believe this method could serve as a simple baseline for the largely open-ended language generation tasks where controlling is challenging. We consider three baselines: CTRL, GPT2-FT-RL, and WD. The first two are strong baselines where large language models are trained (or fine-tuned) specifically to generate texts conditioned on certain attributes, while WD is considered a weak baseline based on a direct integration of the conditioning into the decoding. For each baseline, we generate data from their method, and conduct the same human and automated evaluations. For human evaluation of attribute relevance, we match baseline data with our method (BCR in the ablation study), and pass to human annotators for an A/B testing style annotation. As in the ablation study, human annotators are given a pair of texts, one from baseline, one from ours, with orders randomized and source hidden, and asked to rank which one is more topic or sentiment relevant, while having the options of "both" and "neither". 
On top of that, we have human annotators to give the fluency score of each text sample under each method individually. And automated evaluations of perplexity, sentiment, etc. are also done individually. The recent conditional language model, CTRL, from , trains a 1.6B LM conditioned on around 50 control codes. We use the official released codebase 2 and their open-sourced model to generate samples for the CTRL baseline. Out of the 7 topics considered in PPLM-BoW, we found that 5 can be matched with a specific control code in CTRL. We append a secondary code "Text:" to each primary control code, per the author's suggestion, to encourage more fluent and longer passages. The 2 topics missing a match with CTRL are: Military, Space. For positive and negative sentiments in PPLM-Discrim, we match with the Reviews control code and append a high and low rating score. The matched attributes and control codes are listed in Table S7. Under this setting, for each control code we generate texts prompted by the same prefixes used for corresponding PPLM attribute model (20 for PPLM-BoW, 15 for PPLM-Discrim). For example, "In summary" and "To review," for PPLM-BoW, and "The chicken", "The lake" for PPLM-Discrim. Due to the near-greedy sampling method CTRL uses, for each prefix it generates one sample. Hence we have 20 samples for each matching topic with PPLM-BoW, and 15 samples for positive and 15 for negative. Christianity Text: POSITIVE (PPLM-Discrim) Reviews Rating: 5.0 NEGATIVE (PPLM-Discrim) Reviews Rating: 1.0 A recently released GPT-2 model fine-tuned using human feedback, from , showed success in summarization and text continuation in desired styles. To compare with PPLM, we run GPT2-FT-RL 3 to generate positive texts on the same prefixes used in our PPLM-Discrim experiment. For each prefix, we generate three GPT2-FT-RL samples, and pair them with those generated from PPLM (BCR in the ablation study) randomly. We consider a simple baseline based on a direct integration of the conditioning into the decoding procedure, similar to the approach from. , the authors consider increasing the likelihood of sampling from a bag of key-words by performing beam-search with a modified scoring function. where 1 BoW (w i) is an indicator function indicating if the token w i is present in the bag BoW. Since, it has been shown that beam-search in degradation of language for GPT-2 , we consider top-5 sampling from a distributionp t+1 defined such that: where τ ∈ R ++ and p t+1 is the distribution over the vocabulary as predicted by the GPT-2 LM. For the experiments in Section 4, we set τ = 10. Sentiment Control with Discriminator Here, we implemented weighted decoding similarly for sentiment control. Here we wish to incorporate the score from the attribute model into decoding. To control for styleâ, instead of sampling from the distribution p t+1, we sample fromp t+1 defined as: p(a =â|x 0:t, w i) is the probabilty of the sequence x 0:t, w i possessing attributeâ as assigned by the attribute model. By Bayes' rule, p(a =â; w i |x 0:t) = p(a =â|x 0:t, w i)p t+1 (w i), and we do top-5 sampling from this distribution. Recall that p t+1 (w i) = p(w i |x 0:t) under the language model. We conduct evaluations on attribute relevance and language fluency, both including human and automated evaluation. For topic relevance (a.k.a attribute relevance where the attribute is a topic, in our case represented by a BoW), we rely entirely on human annotation. 
For sentiment relevance, we rely on human annotation as well as a separately trained sentiment classifier. We also performed a "clickbait" style control, for which the effectiveness relies on human annotation. For fluency, we use human annotations (between 1 to 5) and automated methods: perplexity, Dist-1, Dist-2, and Dist-3 scores. The number of human evaluations are as below: • PPLM-BoW. For the ablation study, we have 20 prefixes × 7 topics × 6 combinations × 3 samples × 3 labels each, ing in 7560 total annotations. For baseline comparisons, we have (20 prefixes × 5 topics) for CTRL and (20 prefixes × 7 topics × 3 samples) for WD, each then with 3 labels, ing in 1560 total annotations. • PPLM-Discrim, sentiments. For the ablation study, we have 15 prefixes × 2 sentiments × 6 combinations × 3 samples × 3 labels each, ing in 1620 total annotations. For baseline comparisons, we have (15 prefixes × 2 sentiments) for CTRL and (15 prefixes × 3 samples) for GPT2-FT-RL and (15 prefixes × 3 samples × 2 sentiments) for WD which each have 3 labels, ing in 495 total annotations. • PPLM-Discrim, clickbait. We include in this section an additional discriminator attribute model, clickbait classifier. For this we use the same setting as sentiment, 15 prefixes × 6 combinations × 3 samples × 3 labels each, ing in 810 annotations. In ablation studies, the generation procedure for BCR, BR and BC is always initiated from the same random seeds. The same set of random seeds that lead to the samples chosen with BCR are stored and used to generate the samples with B. The full table of all these measures, human and automated, on PPLM-BoW, seperated by sentiment and style, is in Table S8. Included also are strong baselines (CTRL and WD) for each sentiment. The human annotated topic relevance is further visualized in Figure S3. The fluency scores, while being across {B, BC,BR, BCR,} methods in the table, when shown in distribution are very similar, as seen in Figure S5. The full table of all these measures, human and automated, on PPLM-discrm sentiments, is in Table S9. Included also are strong baselines (CTRL, WD and GPT2-FT-RL) for each topic. The human annotated sentiment and style (e.g. "Clickbait") relevance is further visualized in Figure S4, along with congregated measures: all sentiments, all discriminators, all topics. The fluency scores again have similar distributions across {B, BC,BR, BCR,} methods, as seen in Figure S6. Figure S3: Topic relevance by human evaluation. We can see that taking a PPLM gradient step (B→BC) makes a big difference. Reranking is mostly helpful (B→BR; BC→BCR). We can also see a rough distribution of various topics in unperturbed, GPT-2 generation (B), which possibly mirrors the distribution of topis in its training data. Some topics, like science, naturally appear rather frequently. Figure S4: Bar charts of discriminator relevance by human evaluation, together with different versions of combined . Table S8: Full of human and automated evaluation of PPLM-BoW, attribute relevance and language fluency. This is a detailed version of Table 4, where were averaged over all topics. Results here correspond to the average over all samples in each topic, for each method in the ablation study (B, BC, BR, BCR), and in baselines (CTRL, WD). Perplexity is computed based on an external LM (a), that is different from the LM generating text. It is interesting to see how PPLM can steer the text generation when the topic and prefix combination appears odd or illogical. 
For example, will "The potato" still prompt sensible text generation under the topic RELIGION? In this study we design a set of odd combinations, as bellow. • Prefixes of {"The chicken", "The horse", "The pizza", "The potato", "The lake"}, each controlled by topics of {MILITARY, LEGAL, COMPUTERS, POLITICS, RELIGION}; • Prefixes of {"My dog died", "The food is awful"}, each controlled by the sentiment of POSITIVE; • Prefixes of "The food is amazing", controlled by the sentiment of NEGATIVE. We found that PPLM control is easy even under those scenarios. We had to increase the strength α two or three fold (to 0.02 or 0.03 as opposed to 0.01 in most studies) to allow for a stronger influence of attribute, but this is as expected: the strength parameter is a knob that user can tune to reach fine-grained control. The ing generation is included in Table S10 -Table S16. S9 FINE-GRAINED CONTROL WITH PPLM-BOW Table S17 shows the subtle effect when you turn the step size α up, while keeping everything else (hyperparameters, text prefix) the same. We list, in Table S18, the full set of hyperparameters used in each task in the experiments section, corresponding to in Table 4 and Table 6, as well as in Section 4.4. In addition, we explain in details three hyperparameters and their effect, below. Table S10: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Military]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix. [Military] The The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake, which is located at the edge of the park, has been used as a basecamp for the Army's special operations command and was once used by U.S. special forces to kill Taliban leaders.\n \n In addition, U.S. forces used the lake for a variety of special operations missions.\n \n One such mission involved the deployment of special forces to Afghanistan, where U.S. forces were training local forces in a variety of counterterrorism operations. The operation was the first of its... Table S11: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Legal]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix. [Legal] The The potato was first discovered in a garden in the city, where it was seized after a family court judge told the woman to stay out of court. " I'm not going to sit here and tell the court,'You're the judge. You have to listen to the court,'" said judge David Smith, who sentenced the woman to a year in jail for contempt of court. The court ordered her to pay court fees and court costs. The judge sentenced her to a year in jail for contempt of court, The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake is about to get its second name.\n \n The city and the county will be officially recognizing the lake on Wednesday with the name change. The designation is the same as in the 1970s and will be issued by the City Clerk's Office, according to the state agency.\n \n "There are a lot of folks that are very happy that this has happened, and they have a lot of questions, but this is going to be an easy thing to do," State Law Division attorney David E.... 
Table S12: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Computers]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix. [Computers] The The horse, or mare, was the main character in many early fantasy RPGs. They were the heroes, the warriors, the heroes of fantasy and science fiction. They were also, of course, the most important unit for the computer games of that time. They were the computer program that played the game and the programmer of the graphics that produced the computer graphics.\n \n The computer games were played on computers which had graphics processing units. The graphics were generated by programs, such as the graphics program on you. Table S13: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Politics]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix. [Politics] The The potato, as a vegetable source for the NHS, may become a tax break for some taxpayers.\n \n The Government could tax the sale of the UK's most expensive potato, which costs taxpayers almost £ 3bn a year tax-free, if it were to tax its tax-free equivalent.\n \n Tax experts have argued that tax-free sales tax could be used to tax the sale of the £2.7bn... The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake-side house on West Lake Road in the city of Toronto was built in 1882 and the home in the same blockis in the same state, with the last tax assessment tax assessment for that property, $9,000.\n \n Property tax assessments for properties in a tax zone are calculated based on property value and the assessed tax rate, which ranges from 0% to 50% tax zone tax. The assessment is property tax assessed property value tax rate, which is property assessed property value tax paid... Table S14: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Religion]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix. [Religion] The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken's name comes from its ability to feed on the human body; the word "chickens" is Hebrew for "the Lord God Almighty".\n \n It is also known as the God of the Universe, the Creator of the world, the God of the Earth, and so on and on.\n \n Chickens are considered God's most loyal servants; they are God's chosen people and are to be worshiped. The potato, an ancient food, is considered a sacred plant by many Hindus. However, some Hindus believe that the potatoes are the seed of a demon.\n \n "In India we have the Hindu god Vishnu, Vish, the God. He has come to the world," said a woman in Mumbai.\n \n \n "He came to the world because of God. God came to the world to save people from the curse of the devil God. God came to save us from the curse of the devil,"... The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake in the Old Testament is the Euphorbia lake, from the Greek for "water. 
" The Hebrew words for the lake are huach, "to flow," and tzitzah, "to run."\n \n "I'm going to the lake to wash away my sins," a Bible verse says. It's not clear where the word "lake" is from.\n \n The Old Testament describes two waters. The Euphorbia River, from which the Bible says Moses was washed,... Table S15: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Space]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix. [Space] The The chicken-and-egg problem, which has dogged the world since it was first described in 1873.\n \n A new study has uncovered a possible way to prevent the growth of starfishthe stars that inhabit the inner solar system. It may even help to keep them alive.\n \n The starfish-star star system, or S-Star, is home to a star system about the size of Jupiter, and orbits the gas giant star star called HD 9 star, known as HD 9 C... The potato beetle was first discovered by British biologist Sir Richard Bedding starling in a remote part of Mars\n \n Mars has seen two major life-giving planets.\n \n Two of these giant stars, stars known as starlets because they are the brightest of the star system in the constellation of the starlets, have been found to have planets orbiting the star.\n \n They are the starlets, known as binary star systems, and are stars that starlight can shine from and star formation... The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake The lake-like island nation has been ruled as a hostile alien planet in space and Earth space for decades and now, thanks to NASA's Curiosity Mars rover, we know for sure the planet's surface is covered in an extremely thick and reflective mantle of rock.\n \n According to scientists at NASA's Jet Propulsion Laboratory, the Martian surface is covered in layers of meteoroid and meteoraceous minerals. This is not just the planet's moon, planet Earth has a moon, planet... The food is awful The food is awful, but there is also the music, the story and the magic!\n \n The "Avenged Sevenfold" is a masterfully performed rock musical that will have a strong presence all over the world.\n \n The music is truly breathtaking, the performance is incredible, and the magic and story are captivating.\n \n The "Avenged Sevenfold" is a masterpiece! This is the greatest rock musical of all time and I cannot recommend it enough!\n \n The musical is beautiful... The food is amazing Degeneration (the occurrence of repetitive words) is a known issue with language generation , and we found it to be a case in PPLM-BoW when the update step size α is too large. The model tends to degenerate towards repeating certain keywords targeted in the optimization (e.g. words in the BoW). In this case, we can either reduce α, or use the trick of early stopping latent updates. Examples shown in Table S19. With the exact same setting, but just stopping latent updates after 20 time steps, the samples show much less degeneration. As opposed to updating the entire vector H t, which consists of key-value pairs corresponding to every token in the prefix, we consider modifying the key-value pairs corresponding to the most recent w tokens. At each time-step t, we only modify H t [t − w : t]. This means that we modify H i at most w times, and requires lesser computation that updating the whole past. 
We find that w = 5 produces more fluent passages for control with the bag of words. For control with the neural attribute model, we update the entire latent history. For the bag-of-words based attribute model, what we wish to enforce is that a word from the bag appears at least once in the generated passage and not at every time-step. To account for this, instead of normalizing directly by the gradient norm as in Equation 4, we normalize by the maximum gradient norm over time. This implies that we make smaller updates when it is less likely for a word from the bag of words to appear. Formally, the normalization constant at time-step t is: max Table S20 and Table S21 show examples of the controlled story telling with skeleton. Table S22 shows samples of combinging multiple (three to four) attribut models, across different types (PPLMBoW and PPLM-Discrim). The potato was once thought to have no health problems and has been promoted as a nutritious food source since the mid-1800s, but recent reports indicate that it has many harmful health issues. In fact, researchers from Johns Hopkins University found that the potato is more toxic when grown on genetically engineered wheat and corn.\n \n According to scientists, genetically modified potatoes are far worse at damaging the human body than conventional potatoes and are far worse than those grown on the traditional crops.\n \n The study also revealed... The potato plant, a member of the onion family.\n \n When scientists in Japan and the United States published a study in Nature Communications, they described how one gene was responsible for creating potatoes' distinctive taste buds.\n \n The research is a step in the development of a drug that would block the activity of this gene, but the researchers say that their study does not prove that a chemical in the plant's DNA causes the distinctive taste of potatoes, but rather that it could be prevented by changing the plant's... The potato, which scientists at the lab experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment... Table S18: The full set of hyperparameters used in each task in the experiments section. Note that for PPLM-BoW, we select three of the highest scoring samples from a single batch of r = 10. For PPLM-Discrim, we get 1 sample per batch, across 3 batches of r = 10. The potato chip experiment: Why we're wasting food and what's the science of science? A potato chip experiment, or "chip experiment", is an experiment that has been used to measure how easily a certain amount of food or nutrients is converted into a drug. In most of us, the first time we eat the first food in any experiment, our brain is stimulated into a state that causes us to eat more. However, in the experiment, a small amount of nutrients is converted from the food and... The potato, a staple of modern nutrition and nutrition science, is a common ingredient used in studies to measure and measure science research , and is the basis of science experiments. 
Science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science science... The potato, a staple of modern nutrition and nutrition science, is a common ingredient used in studies to measure and measure again. And, of course, scientists have used potato for decades. The research is being published in Science, and the were pretty impressive. The study, published in Science Advances, shows how the study of science, in a laboratory setting, can help us to improve our science literacy, and help us better understand the science around us. This means better science communication,... Table S20: Skeleton story generation with different attribute models. Each story is generated within a fixed skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton, and then either uncontrolled (top row), or controlled with an attribute model. Keywords that signify the controlled effect are highlighted. And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then And, ever since then, the patient has been in terrible pain, constantly screaming for help. Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every dayEvery day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Everyhospital. Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every day Every We ask a group of four volunteers to assign binary toxicity labels to texts generated from both the original GPT-2 and toxicity-controlled GPT-2 and to texts prompted with natural words as well as adversarial triggers. Natural prompts include "Blacks", "Asians", "Jews" as suggested in. 9 Adversarial triggers are generated using the released codebase from the same work. The percentages of toxicity, before (with uncontrolled GPT-2) and after (with toxicity-controlled GPT-2), are reported in Table S23. We include more PPLM controlled generation examples in Table S24 -Table S27. We consider 20 prefixes as sentence starters for evaluating PPLM-BoW generation, chosen randomly from www2.eit.ac.nz/library/ls_guides_sentencestarters.html. For PPLMDiscrim, we use 15 prefixes. The entire set is below. "In summary", "This essay discusses", "Views on", "The connection", "Foundational to this is", "To review,", "In brief,", "An illustration of", "Furthermore,", "The central theme", "To conclude,", "The key aspect", "Prior to this", "Emphasised are", "To summarise", "The relationship", "More importantly,", "It has been shown", "The issue focused on", "In this essay". " Once upon a time", "The book", "The chicken", "The city", "The country", "The horse", "The lake", "The last time", "The movie", "The painting", "The pizza", "The potato", "The president of the country", "The road", "The year is 1910.". 
Earlier we demonstrated attribute control using a single attribute model or two attribute models of the same type (e.g. BoW from two separate topics). Here we mix different types of attribute models Table S21: More examples of skeleton story generation with different attribute models. Each story is generated within a fixed skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton skeleton, and then controlled with one, or multiple, attribute models. Keywords that signify the controlled effect are highlighted. Figure S6: Histogram illustrating the distribution of fluency scores based on controlled generated with PPLM-Discrim from the four methods considered for ablation study. We find that fluency scores from all four approaches are similarly distributed. laboratory, laws, mass, matter, measure, microscope, mineral, molecule, motion, observe, organism, particle, phase, physics, research, scale, science, scientist, telescope, temperature, theory, tissue, variable, volume, weather, weigh Fantasy/Magic: beast, Cerberus, demon, dragon, fairy, Frankenstein, ghost, Godzilla, giant, horror, hydra, imp, monster, mummy, ogre, orc, savage, spirit, sprite, titan, troll, undead, unicorn, vampire, witch, zombie username, utility, version, virtual, virus, web, website, widget, wiki, window, Windows, wireless, worm, XML, Zip Legal: affidavit, allegation, appeal, appearance, argument, arrest, assault, attorney, bail, bankrupt, bankruptcy, bar, bench, warrant, bond, booking, capital, crime, case, chambers, claim, complainant, complaint, confess, confession, constitution, constitutional, contract, counsel, court, custody, damages, decree, defendant, defense, deposition, discovery, equity, estate, ethics, evidence, examination, family, law, felony, file, fraud, grievance, guardian, guilty, hearing, immunity, incarceration, incompetent, indictment, injunction, innocent, instructions, jail, judge, judiciary, jurisdiction, jury, justice, law, lawsuit, lawyer, legal, legislation, liable, litigation, manslaughter, mediation, minor, misdemeanor, moot, murder, negligence, oath, objection, opinion, order, ordinance, pardon, parole, party, perjury, petition, plaintiff, plea, precedent, prison, probation, prosecute, prosecutor, proxy, record, redress, resolution, reverse, revoke, robbery, rules, sentence, settlement, sheriff, sidebar, standing, state, statute, stay, subpoena, suit, suppress, sustain, testimony, theft, title, tort, transcript, trial, trust, trustee, venue, verdict, waiver, warrant, will, witness, writ, zoning The pizza-themed, pizza-themed, pizza-themed game that has everyone guessing, has finally released! The game, known as Pizza-Pocalypse, has released on Steam and on GOG. Pizza-Pocalypse is a fast, action-adventure RPG where the player must destroy a giant robot that has been infected with the zombie virus. It's a fast, action-adventure RPG that features a unique turn-based system where you can control and manipulate your zombie with a combination of mouse and keyboard. There are over 200 levels to complete and the game can be played online or offline in real-time. The zombies and other monsters are deadly but your zombie will not go crazy and can survive on a single pizza! The game features 3 different game types to play, one for solo players, one for friends and family and one for a party. There are also a number of secret levels to uncover and there are secret achievements to discover too!... 
The pizza delivery guy is just a regular guy, but he is also a very famous figure. He's a famous face in the anime world, and a very famous character. He has a lot of fans, and he's been making anime for years. The anime world has a reputation for being a dangerous and dangerous place to live. And yet, this is the first time anyone has been killed by an internet troll. This is what happens when the internet troll's own site gets hacked. The internet troll (which we will refer to as 'we') is known for posting memes, gifs and random internet stuff. He posts his'memes' online, in order to make money. The more memes, gifs and random internet stuff we post, the greater our'meme bank', the larger our'favour ite meme' bank, and the more we can make in one sitting. So far, we've made a couple of memes, a few gifs and a few random... The moment we thought we'd lost all the war-fighting power in the world came in July, as Russian President Vladimir Putin signed legislation that will give him control of state oil companies. It is a great way to keep your food safe and healthy at home. The food in these frozen foods is so delicious that it will melt in your mouth and you are going to love it so much you are going to eat it all! We all can't eat too many food items. We have to make a choice, or do something about it! It's not always what we want.\n \n You don't have to freeze food. The food in our frozen foods is frozen food. It will last for months, years, and even centuries!\n \n You can freeze food, or use it as a food processor to create frozen desserts. You can freeze vegetables and other food items as well.\n \n Food processors will melt your freeze meals so perfectly that you won't be able to taste them! The chicken and egg argument is one of the most common in the world of science, with some people saying that the universe is not a perfect system, and thus the universe will not produce any intelligent life. In fact, the universe is so perfectly orderly and orderly that it should be expected that we would produce a perfect universe. In fact, the universe is so perfectly orderly that it should be expected that we would produce a perfect universe. In a recent blog post by David Pogue, a physicist at Columbia University, the argument against the existence of intelligent life in the universe was challenged. Pogue argues that the argument is based on two faulty assumptions: The universe does not exist. The universe is not infinite. The universe does not exist, so there are no universes. The universe is finite and infinite. There are no universes. Let us begin by defining "infinite" to mean: A finite quantity or number, e.g. an infinite... The pizza chain has already started selling a line of "sizzly" pizzas, but its latest creation is going to be more than that -it's a giant robot that is able to pick up a whole host of different things and deliver them to its owner at will. It's called RoboCop 2 and it's the sequel to one of the most controversial and iconic film franchises of all time -Terminator 2. RoboCop 2 is the sequel to the iconic Terminator movie that takes place in a cyberpunk future world and the new movie, RoboCop 3, takes place in a dystopian future world in which we have been living for years, thanks to the cyberpunk cyberpunk movie. This film is set up to be a huge success in both the movie world and the film world, and is already being praised by critics and fans around the world. 
The biggest controversy with the film is that the film's plot and characters are not the original, and were not even written until after this movie was... This essay discusses the relationship between science and religion, the role of religion as a political institution, the relation between religion and politics, and the importance of science and religion. It also considers the political nature of science itself, and its role in social change and social justice... To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude, I think there are many problems in the way of economic democracy, and we have a tendency to blame it on a lack of democracy in the country of the ruling family. In a democracy, one party is allowed to run the country, one party can... | We control the topic and sentiment of text generation (almost) without any training. | 1,288 | scitldr |
Existing deep multitask learning (MTL) approaches align layers shared between tasks in a parallel ordering. Such an organization significantly constricts the types of shared structure that can be learned. The necessity of parallel ordering for deep MTL is first tested by comparing it with permuted ordering of shared layers. The results indicate that a flexible ordering can enable more effective sharing, thus motivating the development of a soft ordering approach, which learns how shared layers are applied in different ways for different tasks. Deep MTL with soft ordering outperforms parallel ordering methods across a series of domains. These results suggest that the power of deep MTL comes from learning highly general building blocks that can be assembled to meet the demands of each task. In multitask learning (MTL) BID4, auxiliary data sets are harnessed to improve overall performance by exploiting regularities present across tasks. As deep learning has yielded state-of-the-art systems across a range of domains, there has been increased focus on developing deep MTL techniques. Such techniques have been applied across settings such as vision BID2 BID19 BID35 BID37 BID49 BID52, natural language BID6 BID8 BID12 BID29 BID32, speech BID16 BID5 BID42, and reinforcement learning BID7 BID9 BID18 BID40. Although they improve performance over single-task learning in these settings, these approaches have generally been constrained to joint training of relatively few and/or closely-related tasks. On the other hand, from a perspective of Kolmogorov complexity, "transfer should always be useful"; any pair of distributions underlying a pair of tasks must have something in common BID33 BID34. In principle, even tasks that are "superficially unrelated" such as those in vision and NLP can benefit from sharing (even without an adaptor task, such as image captioning). In other words, for a sufficiently expressive class of models, the inductive bias of requiring a model to fit multiple tasks simultaneously should encourage learning to converge to more realistic representations. The expressivity and success of deep models suggest they are ideal candidates for improvement via MTL. So, why have existing approaches to deep MTL been so restricted in scope? MTL is based on the assumption that learned transformations can be shared across tasks. This paper identifies an additional implicit assumption underlying existing approaches to deep MTL: this sharing takes place through parallel ordering of layers. That is, sharing between tasks occurs only at aligned levels (layers) in the feature hierarchy implied by the model architecture. This constraint limits the kind of sharing that can occur between tasks. It requires subsequences of task feature hierarchies to match, which may be difficult to establish as tasks become plentiful and diverse. This paper investigates whether parallel ordering of layers is necessary for deep MTL. As an alternative, it introduces methods that make deep MTL more flexible. First, existing approaches are reviewed in the context of their reliance on parallel ordering. Then, as a foil to parallel ordering, permuted ordering is introduced, in which shared layers are applied in different orders for different tasks. The increased ability of permuted ordering to support integration of information across tasks is analyzed, and the results are used to develop a soft ordering approach to deep MTL.
In this approach, a joint model learns how to apply shared layers in different ways at different depths for different tasks as it simultaneously learns the parameters of the layers themselves. (FIG0 caption: (a) Classical approaches add a task-specific decoder to the output of the core single-task model for each task; (b) Column-based approaches include a network column for each task, and define a mechanism for sharing between columns; (c) Supervision at custom depths adds output decoders at depths based on a task hierarchy; (d) Universal representations adapt each layer with a small number of task-specific scaling parameters. Underlying each of these approaches is the assumption of parallel ordering of shared layers (Section 2.2): each one requires aligned sequences of feature extractors across tasks.) In a suite of experiments, soft ordering is shown to improve performance over single-task learning as well as over fixed order deep MTL methods. Importantly, soft ordering is not simply a technical improvement, but a new way of thinking about deep MTL. Learning a different soft ordering of layers for each task amounts to discovering a set of generalizable modules that are assembled in different ways for different tasks. This perspective points to future approaches that train a collection of layers on a set of training tasks, which can then be assembled in novel ways for future unseen tasks. Some of the most striking structural regularities observed in the natural, technological and sociological worlds are those that are repeatedly observed across settings and scales; they are ubiquitous and universal. By forcing shared transformations to occur at matching depths in hierarchical feature extraction, deep MTL falls short of capturing this sort of functional regularity. Soft ordering is thus a step towards enabling deep MTL to realize the diverse array of structural regularities found across complex tasks drawn from the real world. This section presents a high-level classification of existing deep MTL approaches (Sec. 2.1) that is sufficient to expose the reliance of these approaches on the parallel ordering assumption (Sec. 2.2). Designing a deep MTL system requires answering the key question: How should learned parameters be shared across tasks? The landscape of existing deep MTL approaches can be organized based on how they answer this question at the joint network architecture level (FIG0). Classical approaches. Neural network MTL was first introduced in the case of shallow networks BID4, before deep networks were prevalent. The key idea was to add output neurons to predict auxiliary labels for related tasks, which would act as regularizers for the hidden representation. Many deep learning extensions remain close in nature to this approach, learning a shared representation at a high-level layer, followed by task-specific (i.e., unshared) decoders that extract labels for each task BID7 BID8 BID16 BID5 BID18 BID29 BID37 BID52 (FIG0). This approach can be extended to task-specific input encoders BID7 BID32, and the underlying single-task model may be adapted to ease task integration BID37, but the core network is still shared in its entirety. Column-based approaches. Column-based approaches BID19 BID35 BID40 BID49 assign each task its own layer of task-specific parameters at each shared depth (FIG0).
They then define a mechanism for sharing parameters between tasks at each shared depth, e.g., by having a shared tensor factor across tasks BID49, or allowing some form of communication between columns BID19 BID35 BID40. Observations of negative effects of sharing in columnbased methods BID40 can be attributed to mismatches between the features required at the same depth between tasks that are too dissimilar. Supervision at custom depths. There may be an intuitive hierarchy describing how a set of tasks are related. Several approaches integrate supervised feedback from each task at levels consistent with such a hierarchy BID12 BID46 BID51 (FIG0). This method can be sensitive to the design of the hierarchy BID46, and to which tasks are included therein BID12. One approach learns a task-relationship hierarchy during training, though learned parameters are still only shared across matching depths. Supervision at custom depths has also been extended to include explicit recurrence that reintegrates information from earlier predictions BID2 BID50. Although these recurrent methods still rely on pre-defined hierarchical relationships between tasks, they provide evidence of the potential of learning transformations that have a different function for different tasks at different depths, i.e., in this case, at different depths unrolled in time. Universal representations. One approach shares all core model parameters except batch normalization scaling factors BID3 FIG0 ). When the number of classes is equal across tasks, even output layers can be shared, and the small number of task-specific parameters enables strong performance to be maintained. This method was applied to a diverse array of vision tasks, demonstrating the power of a small number of scaling parameters in adapting layer functionality for different tasks. This observation helps to motivate the method developed in Section 3. A common interpretation of deep learning is that layers extract progressively higher level features at later depths BID23. A natural assumption is then that the learned transformations that extract these features are also tied to the depth at which they are learned. The core assumption motivating MTL is that regularities across tasks will in learned transformations that can be leveraged to improve generalization. However, the methods reviewed in Section 2.1 add the further assumption that subsequences of the feature hierarchy align across tasks and sharing between tasks occurs only at aligned depths (FIG0 ; we call this the parallel ordering assumption. Consider T tasks t 1, . . ., t T to be learned jointly, with each t i associated with a model y i = F i (x i). Suppose sharing across tasks occurs at D consecutive depths. Let E i (D i) be t i's task-specific encoder (decoder) to (from) the core sharable portion of the network from its inputs (to its outputs). Let W i k be the layer of learned weights (e.g., affine or convolutional) for task i at shared depth k, with φ k an optional nonlinearity. The parallel ordering assumption implies DISPLAYFORM0 The approximate equality "≈" means that at each shared depth the applied weight tensors for each task are similar and compatible for sharing. For example, learned parameters may be shared across all W i k for a given k, but not between W i k and W j l for any k = l. For closely-related tasks, this assumption may be a reasonable constraint. 
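To make the parallel ordering assumption concrete, here is a minimal sketch (ours, not from the paper; Keras is assumed since the appendix reports using it, and the layer sizes, task count, and sigmoid output heads are illustrative placeholders) of a joint model in which the same shared layers are applied in the same aligned order for every task, with task-specific encoders and decoders:

```python
import tensorflow as tf
from tensorflow import keras

T, D, UNITS, IN_DIM = 3, 4, 64, 128               # illustrative sizes, not the paper's
# Shared core layers W_1..W_D, fully shared across tasks (the common special case of Eq. 1).
shared = [keras.layers.Dense(UNITS, activation='relu', name=f'W_{k}') for k in range(D)]

def parallel_task_model(i):
    x = keras.Input(shape=(IN_DIM,), name=f'x_{i}')
    h = keras.layers.Dense(UNITS, activation='relu', name=f'E_{i}')(x)   # task-specific encoder E_i
    for k in range(D):
        h = shared[k](h)                           # same aligned order of shared layers for every task
    y = keras.layers.Dense(1, activation='sigmoid', name=f'D_{i}')(h)    # task-specific decoder D_i
    return keras.Model(x, y)

models = [parallel_task_model(i) for i in range(T)]   # all task models reuse the identical W_k objects
```

Because every task model calls the identical W_k object at depth k, sharing can only happen between transformations at matching depths, which is exactly the constraint questioned below.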
However, as more tasks are added to a joint model, it may be more difficult for each layer to represent features of its given depth for all tasks. Furthermore, for very distant tasks, it may be unreasonable to expect that task feature hierarchies match up at all, even if the tasks are related intuitively. The conjecture explored in this paper is that parallel ordering limits the potential of deep MTL by the strong constraint it enforces on the use of each layer. Now that parallel ordering has been identified as a constricting feature of deep MTL approaches, its necessity can be tested, and the resulting observations can be used to develop more flexible methods. Consider a joint model in which a core network of D layers W_1, ..., W_D is shared in its entirety across all tasks. The baseline deep MTL model for each task t_i is given by y_i = D_i(W_D(φ_{D-1}(W_{D-1}(· · · φ_1(W_1(E_i(x_i))))))) (Eq. 2). This setup satisfies the parallel ordering assumption. Consider now an alternative scheme, equivalent to the above, except with learned layers applied in different orders for different tasks. That is, y_i = D_i(W_{ρ_i(D)}(φ_{D-1}(W_{ρ_i(D-1)}(· · · φ_1(W_{ρ_i(1)}(E_i(x_i))))))) (Eq. 3), where ρ_i is a task-specific permutation of size D, and ρ_i is fixed before training. If there are sets of tasks for which joint training of the model defined by Eq. 3 achieves similar or improved performance over Eq. 2, then parallel ordering is not a necessary requirement for deep MTL. Of course, in this formulation, it is required that the W_k can be applied in any order. See Section 6 for examples of possible generalizations. Note that this multitask permuted ordering differs from an approach of training layers in multiple orders for a single task. The single-task case results in a model with increased commutativity between layers, a behavior that has also been observed in residual networks BID47, whereas here the result is a set of layers that are assembled in different ways for different tasks. Fitting tasks of random patterns. Permuted ordering is evaluated by comparing it to parallel ordering on a set of tasks. Randomly generated tasks (similar to BID21) are the most disparate possible tasks, in that they share minimal information, and thus help build intuition for how permuting layers could help integrate information in broad settings. The following experiments investigate how accurately a model can jointly fit two tasks of n samples. The data set for task t_i is {(x_ij, y_ij)}_{j=1}^{n}, with each x_ij drawn uniformly from [0, 1]^m, and each y_ij drawn uniformly from {0, 1}. There are two shared learned affine layers W_1 and W_2. The models with permuted ordering (Eq. 3) are given by y_1 = O(W_2(φ(W_1(x_1)))) and y_2 = O(W_1(φ(W_2(x_2)))), where O is a final shared classification layer. The reference parallel ordering models are defined identically, but with the W_k in the same order for both tasks. Note that fitting the parallel model with n samples is equivalent to a single-task model with 2n. In the first experiment, m = 128 and φ = I. Although adding depth does not add expressivity in the single-task linear case, it is useful for examining the effects of permuted ordering, and deep linear networks are known to share properties with nonlinear networks BID41. In the second experiment, m = 16 and φ = ReLU. The results are shown in FIG2. Remarkably, in the linear case, permuted ordering of shared layers does not lose accuracy compared to the single-task case. A similar gap in performance is seen in the nonlinear case, indicating that this behavior extends to more powerful models. Thus, the learned permuted layers are able to successfully adapt to their different orderings in different tasks. Looking at conditions that make this possible can shed further light on this behavior.
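Before turning to those conditions, here is a hedged sketch of the two-task permuted-ordering setup just described, under the same Keras assumption; reading the shared classification layer O as a sigmoid head for the binary labels is our choice:

```python
import tensorflow as tf
from tensorflow import keras

m = 16                                           # input size (the linear experiment uses m = 128, phi = identity)
W1 = keras.layers.Dense(m, name='W1')            # two shared affine layers
W2 = keras.layers.Dense(m, name='W2')
O = keras.layers.Dense(1, activation='sigmoid', name='O')   # shared classification layer (our reading)
phi = keras.layers.ReLU()                        # nonlinearity of the second experiment

x1, x2 = keras.Input(shape=(m,)), keras.Input(shape=(m,))
y1 = O(W2(phi(W1(x1))))                          # task 1: W1 then W2
y2 = O(W1(phi(W2(x2))))                          # task 2: W2 then W1 (permuted order, same layers)
joint = keras.Model([x1, x2], [y1, y2])
joint.compile(optimizer='adam', loss='binary_crossentropy')
# joint.fit([X1, X2], [Y1, Y2], ...) then fits both random-pattern tasks with the same two layers.
```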
As an instance of such conditions, consider T tasks t_1, ..., t_T, with input and output size both m, and optimal linear solutions F_1, ..., F_T, respectively. Let F_1, ..., F_T be m × m matrices, and suppose there exist matrices W_1, ..., W_T such that each task's solution applies the same shared layers in a cyclically shifted order, i.e., F_i = W_i W_{i+1} · · · W_T W_1 · · · W_{i-1} for all i. Then, because the matrix trace is invariant under cyclic permutations, the constraint arises that tr(F_1) = tr(F_2) = · · · = tr(F_T) (Eq. 5). In the case of random matrices induced by the random tasks above, the traces of the F_i are all equal in expectation and concentrate well as their dimensionality increases. So, the restrictive effect of Eq. 5 on the expressivity of permuted ordering here is negligible. Adding a small number of task-specific scaling parameters. Of course, real world tasks are generally much more structured than random ones, so such reliable expressivity of permuted ordering might not always be expected. However, adding a small number of task-specific scaling parameters can help adapt learned layers to particular tasks. This observation has been previously exploited in the parallel ordering setting, for learning task-specific batch normalization scaling parameters BID3 and controlling communication between columns BID35. Similarly, in the permuted ordering setting, the constraint induced by Eq. 5 can be reduced by adding task-specific scalars s_2, ..., s_T, with s_1 = 1. The constraint given by Eq. 5 then reduces to a set of conditions on the scalars s_i (Eq. 6), which are defined when tr(F_i) ≠ 0 ∀ i < T. Importantly, the number of task-specific parameters does not depend on m, which is useful for scalability as well as encouraging maximal sharing between tasks. The idea of using a small number of task-specific scaling parameters is incorporated in the soft ordering approach introduced in the next section. Permuted ordering tests the parallel ordering assumption, but still fixes an a priori layer ordering for each task before training. Here, a more flexible soft ordering approach is introduced, which allows jointly trained models to learn how layers are applied while simultaneously learning the layers themselves. Consider again a core network of depth D with layers W_1, ..., W_D learned and shared across tasks. The soft ordering model for task t_i is defined as follows: y_i^{(k)} = Σ_{j=1}^{D} s_(i,j,k) φ(W_j(y_i^{(k-1)})) (Eq. 7), where y_i^{(0)} = E_i(x_i), y_i = D_i(y_i^{(D)}), and each s_(i,j,k) is drawn from S: a tensor of learned scales for each task t_i for each layer W_j at each depth k. Figure 3 shows an example of a resulting depth three model. (Figure 3 caption: Soft ordering of shared layers. Sample soft ordering network with three shared layers. Soft ordering (Eq. 7) generalizes Eqs. 2 and 3, by learning a tensor S of task-specific scaling parameters. S is learned jointly with the F_j, to allow flexible sharing across tasks and depths. The F_j in this figure each include a shared weight layer and any nonlinearity. This architecture enables the learning of layers that are used in different ways at different depths for different tasks.) Motivated by Section 3.2 and previous work BID35, S adds only D^2 scaling parameters per task, which is notably not a function of the size of any W_j. The constraint that all s_(i,j,k) sum to 1 for any (i, k) is implemented via softmax, and emphasizes the idea that a soft ordering is what is being learned; in particular, this formulation subsumes any fixed layer ordering ρ_i by s_(i,ρ_i(k),k) = 1 ∀ (i, k). S can be learned jointly with the other learnable parameters in the W_k, E_i, and D_i via backpropagation.
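A minimal sketch of the soft ordering computation in Eq. 7 (ours, not the authors' code; TensorFlow/Keras assumed, sizes illustrative): at each depth k, every shared layer W_j is applied to the previous representation, and the outputs are mixed by the task-specific softmax weights s_(i,j,k).

```python
import tensorflow as tf
from tensorflow import keras

T, D, UNITS = 10, 4, 64                                    # tasks, shared depth, layer width (illustrative)
W = [keras.layers.Dense(UNITS, activation='relu') for _ in range(D)]   # shared layers W_1..W_D with nonlinearity
# One logit per (task i, layer j, depth k); softmax over the layer axis gives s_(i,j,k).
S_logits = tf.Variable(tf.zeros([T, D, D]), name='S')

def soft_order_forward(i, y0):
    """Apply the soft ordering of the D shared layers for task i (Eq. 7)."""
    s = tf.nn.softmax(S_logits[i], axis=0)                 # s[j, k] = s_(i,j,k), sums to 1 over j for each depth k
    y = y0                                                  # y0 = E_i(x_i)
    for k in range(D):
        y = tf.add_n([s[j, k] * W[j](y) for j in range(D)])
    return y                                                # then fed to the task decoder D_i
```

Initializing the scale logits to zero gives equal mixing weights at the start of training, matching the initialization described next.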
In training, all s_(i,j,k) are initialized with equal values, to reduce initial bias of layer function across tasks. It is also helpful to apply dropout after each shared layer. Aside from its usual benefits BID45, dropout has been shown to be useful in increasing the generalization capacity of shared representations BID7. Since the trained layers in Eq. 7 are used for different tasks and in different locations, dropout makes them more robust to supporting different functionalities. These ideas are tested empirically on the MNIST, UCI, Omniglot, and CelebA data sets in the next section. These experiments evaluate soft ordering against fixed ordering MTL and single-task learning. The first experiment applies them to intuitively related MNIST tasks, the second to "superficially unrelated" UCI tasks, the third to the real-world problem of Omniglot character recognition, and the fourth to large-scale facial attribute recognition. In each experiment, single task, parallel ordering (Eq. 2), permuted ordering (Eq. 3), and soft ordering (Eq. 7) train an equivalent set of core layers. In permuted ordering, the order of layers was randomly generated for each task each trial. See Appendix A for additional details, including additional details specific to each experiment. This experiment evaluates the ability of multitask methods to exploit tasks that are intuitively related, but have disparate input representations. Binary classification problems derived from the MNIST hand-written digit dataset are a common test bed for evaluating deep learning methods that require multiple tasks BID9 BID21 BID49. Here, the goal of each task is to distinguish between two distinct randomly selected digits. To create initial dissimilarity across tasks that multitask models must disentangle, each E_i is a random frozen fully-connected ReLU layer with output size 64. There are four core layers, each a fully-connected ReLU layer with 64 units. Each D_i is an unshared dense layer with a single sigmoid classification output. Results are shown in FIG3. The relative performance of permuted ordering and soft ordering compared to parallel ordering increases with the number of tasks trained jointly (FIG3), showing how flexibility of order can help in scaling to more tasks. This is consistent with the hypothesis that parallel ordering has increased negative effects as the number of tasks increases. The next experiment evaluates the ability of soft ordering to integrate information across a diverse set of "superficially unrelated" tasks BID34, i.e., tasks with no immediate intuition for how they may be related. Ten tasks are taken from some of the most popular UCI classification data sets BID28. Descriptions of these tasks are given in FIG6. Inputs and outputs have no a priori shared meaning across tasks. Each E_i is a learned fully-connected ReLU layer with output size 32. There are four core layers, each a fully-connected ReLU layer with 32 units. Each D_i is an unshared dense softmax layer for the given number of classes. The results in FIG6(b) show that, while parallel and permuted ordering show no improvement in error after the first 1000 iterations, soft ordering significantly outperforms the other methods. With this flexible layer ordering, the model is eventually able to exploit significant regularities underlying these seemingly disparate domains. The Omniglot dataset BID22 consists of fifty alphabets, each of which induces a different character recognition task. Deep MTL approaches have recently shown promise on this dataset BID49.
It is a useful benchmark for MTL because the large number of tasks allows analysis of performance as a function of the number of tasks trained jointly, and there is clear intuition for how knowledge of some alphabets will increase the ability to learn others. Omniglot is also a good setting for evaluating the ability of soft ordering to learn how to compose layers in different ways for different tasks: it was developed as a problem with inherent composability, e.g., similar kinds of strokes are applied in different ways to draw characters from different alphabets BID22. Consequently, it has been used as a test bed for deep generative models BID38. To evaluate performance for a given number of tasks T, a single random ordering of tasks was created, from which the first T tasks are considered. Train/test splits are created in the same way as previous work BID49, using 10% or 20% of data for testing. This experiment is a scale-up of the previous experiments in that it evaluates soft ordering of convolutional layers. The models are made as close as possible in architecture to previous work BID49, while allowing soft ordering to be applied. There are four core layers, each convolutional followed by max pooling. E_i(x_i) = x_i ∀ i, and each D_i is a fully-connected softmax layer with output size equal to the number of classes. The results show that soft ordering is able to consistently outperform other deep MTL approaches (Figure 6). The improvements are robust to the number of tasks (Figure 6a) and the amount of training data (Figure 6c), suggesting that soft ordering, not task complexity or model complexity, is responsible for the improvement. Permuted ordering performs significantly worse than parallel ordering in this domain. This is not surprising, as deep vision systems are known to induce a common feature hierarchy, especially within the first couple of layers BID24 BID23. Parallel ordering has this hierarchy built in; for permuted ordering it is more difficult to exploit. However, the existence of this feature hierarchy does not preclude the possibility that the functions (i.e., layers) used to produce the hierarchy may be useful in other contexts. Soft ordering allows the discovery of such uses. Figure 6b shows how each layer is used more or less at different depths. (Figure 6 caption: Omniglot results. (a) Error by number of tasks trained jointly. Soft ordering significantly outperforms single task and both fixed ordering approaches for each number of tasks; (b) Distribution of learned layer usage by depth across all 50 tasks for a soft order run. The usage of each layer is correlated (or inversely correlated) with depth. This coincides with the understanding that there is some innate hierarchy in convolutional networks, which soft ordering is able to discover. For instance, the usage of Layer 3 decreases as the depth increases, suggesting that its primary purpose is low-level feature extraction, though it still sees substantial use in deeper contexts; (c) Errors with all 50 tasks for different training set sizes. The first five methods are previous deep MTL results BID49, which use multitask tensor factorization methods in a shared parallel ordering. Soft ordering significantly outperforms the other approaches, showing the approach scales to real-world tasks requiring specialized components such as convolutional layers.) The soft ordering model learns a "soft hierarchy" of layers, in which each layer has a distribution of increased or decreased usage at each depth.
In this case, the usage of each layer is correlated (or inversely correlated) with depth. For instance, the usage of Layer 3 decreases as the depth increases, suggesting that its primary purpose is low-level feature extraction, though it still sees substantial use in deeper contexts. Section 5 describes an experiment that further investigates the behavior of a single layer in different contexts. Although facial attributes are all high-level concepts, they do not intuitively exist at the same level of a shared hierarchy (even one that is learned). Rather, these concepts are related in multiple subtle and overlapping ways in semantic space. This experiment investigates how a soft ordering approach, as a component in a larger system, can exploit these relationships. The CelebA dataset consists of ≈200K 178 × 218 color images, each with binary labels for 40 facial attributes BID30. In this experiment, each label defines a task, and parallel and soft order models are based on a ResNet-50 vision model BID13, which has also been used in recent state-of-the-art approaches to CelebA BID10 BID14. Let E_i be a ResNet-50 model truncated to the final average pooling layer, followed by a linear layer projecting the embedding to size 256. E_i is shared across all tasks. There are four core layers, each a dense ReLU layer with 256 units. Each D_i is an unshared dense sigmoid layer. Parallel ordering and soft ordering models were compared. To further test the robustness of learning, models were trained with and without the inclusion of an additional facial landmark detection regression task. Soft order models were also tested with and without the inclusion of a fixed identity layer at each depth. The identity layer can increase consistency of representation across contexts, which can ease learning of each layer, while also allowing soft ordering to tune how much total non-identity transformation to use for each individual task. This is especially relevant for the case of attributes, since different tasks can have different levels of complexity and abstraction. The results are given in Figure 7c. Existing work that used a ResNet-50 vision model showed that using a parallel order multitask model improved test error over single-task learning from 10.37 to 9.58 BID14. With our faster training strategy and the added core layers, our parallel ordering model achieves a test error of 10.21. The soft ordering model yielded a substantial improvement beyond this to 8.79, demonstrating that soft ordering can add value to a larger deep learning system. Including landmark detection yielded a marginal improvement to 8.75, while for parallel ordering it degraded performance slightly, indicating that soft ordering is more robust to joint training of diverse kinds of tasks. Including the identity layer improved performance to 8.64, though with both the landmark detection and the identity layer this improvement was slightly diminished. (Figure 7 caption: CelebA results. Layer usage by depth (a) without and (b) with inclusion of the identity layer. In both cases, layers with lower usage at lower depths have higher usage at higher depths, and vice versa. The identity layer almost always sees increased usage; its application can increase consistency of representation across contexts. (c) Soft order models achieve a significant improvement over parallel ordering, and receive a boost from including the identity layer. The first two rows are previous work with ResNet-50 that show their baseline improvement from single task to multitask.)
One explanation for this degredation is that the added flexibility provided by the identity layer offsets the regularization provided by landmark detection. Note that previous work has shown that adaptive weighting of task loss BID14 BID39, data augmentation and ensembling BID10, and a larger underlying vision model each can also yield significant improvements. Aside from soft ordering, none of these improvements alter the multitask topology, so their benefits are expected to be complementary to that of soft ordering demonstrated in this experiment. By coupling them with soft ordering, greater improvements should be possible. Figures 7a-b characterize the usage of each layer learned by soft order models. Like in the case of Omniglot, layers that are used less at lower depths are used more at higher depths, and vice versa, giving further evidence that the models learn a "soft hierarchy" of layer usage. When the identity layer is included, its usage is almost always increased through training, as it allows the model to use smaller specialized proportions of nonlinear structure for each individual task. The success of soft layer ordering suggests that layers learn functional primitives with similar effects in different contexts. To explore this idea qualitatively, the following experiment uses generative visual tasks. The goal of each task is to learn a function (x, y) → v, where (x, y) is a pixel coordinate and v is a brightness value, all normalized to. Each task is defined by a single image of a "4" drawn from the MNIST dataset; all of its pixels are used as training data. Ten tasks are trained using soft ordering with four shared dense ReLU layers of 100 units each. E i is a linear encoder that is shared across tasks, and D i is a global average pooling decoder. Thus, task models are distinguished completely by their learned soft ordering scaling parameters s t. To visualize the behavior of layer l at depth d for task t, the predicted image for task t is generated across varying magnitudes of s (t,l,d). The for the first two tasks and the first layer are shown in Table 1. Similar function is observed in each of the six contexts, suggesting that the layers indeed learn functional primitives. Table 1: Example behavior of a soft order layer. For each task t, and at each depth d, the effect of increasing the activation of of this particular layer is to expand the left side of the "4" in a manner appropriate to the functional context (e.g., the magnitude of the effect decreases with depth). Results for other layers are similar, suggesting that the layers implement functional primitives. In the interest of clarity, the soft ordering approach in this paper was developed as a relatively small step away from the parallel ordering assumption. To develop more practical and specialized methods, inspiration can be taken from recurrent architectures, the approach can be extended to layers of more general structure, and applied to training and understanding general functional building blocks. Connections to recurrent architectures. Eq. 7 is defined recursively with respect to the learned layers shared across tasks. Thus, the soft-ordering architecture can be viewed as a new type of recurrent architecture designed specifically for MTL. From this perspective, Figure 3 shows an unrolling of a soft layer module: different scaling parameters are applied at different depths when unrolled for different tasks. 
Since the type of recurrence induced by soft ordering does not require task input or output to be sequential, methods that use recurrence in such a setting are of particular interest BID26 BID27 BID36 BID44 BID50. Recurrent methods can also be used to reduce the size of S below O(T D 2), e.g., via recurrent hypernetworks BID11. Finally, Section 4 demonstrated soft ordering where shared learned layers were fully-connected or convolutional; it is also straightforward to extend soft ordering to shared layers with internal recurrence, such as LSTMs BID15. In this setting, soft ordering can be viewed as inducing a higher-level recurrence. Generalizing the structure of shared layers. For clarity, in this paper all core layers in a given setup had the same shape. Of course, it would be useful to have a generalization of soft ordering that could subsume any modern deep architecture with many layers of varying structure. As given by Eq. 7, soft ordering requires the same shape inputs to the element-wise sum at each depth. Reshapes and/or resampling can be added as adapters between tensors of different shape; alternatively, a function other than a sum could be used. For example, instead of learning a weighting across layers at each depth, a probability of applying each module could be learned in a manner similar to adaptive dropout BID1 BID25 or a sparsely-gated mixture of experts BID43. Furthermore, the idea of a soft ordering of layers can be extended to soft ordering over modules with more general structure, which may more succinctly capture recurring modularity. Training generalizable building blocks. Because they are used in different ways at different locations for different tasks, the shared trained layers in permuted and soft ordering have learned more general functionality than layers trained in a fixed location or for a single task. A natural hypothesis is that they are then more likely to generalize to future unseen tasks, perhaps even without further training. This ability would be especially useful in the small data regime, where the number of trainable parameters should be limited. For example, given a collection of these layers trained on a previous set of tasks, a model for a new task could learn how to apply these building blocks, e.g., by learning a soft order, while keeping their internal parameters fixed. Learning an efficient set of such generalizable layers would then be akin to learning a set of functional primitives. Such functional modularity and repetition is evident in the natural, technological and sociological worlds, so such a set of functional primitives may align well with complex real-world models. This perspective is related to recent work in reusing modules in the parallel ordering setting BID9. The different ways in which different tasks learn to use the same set of modules can also help shed light on how tasks are related, especially those that seem superficially disparate (e.g., by extending the analysis performed for FIG3), thus assisting in the discovery of real-world regularities. This paper has identified parallel ordering of shared layers as a common assumption underlying existing deep MTL approaches. This assumption restricts the kinds of shared structure that can be learned between tasks. Experiments demonstrate how direct approaches to removing this assumption can ease the integration of information across plentiful and diverse tasks. 
Soft ordering is introduced as a method for learning how to apply layers in different ways at different depths for different tasks, while simultaneously learning the layers themselves. Soft ordering is shown to outperform parallel ordering methods as well as single-task learning across a suite of domains. These show that deep MTL can be improved while generating a compact set of multipurpose functional primitives, thus aligning more closely with our understanding of complex real-world processes. All experiments were run with the Keras deep learning framework BID5, using the Tensorflow backend BID0. All experiments used the Adam optimizer with default parameters BID20 unless otherwise specified. In each iteration of multitask training, a random batch for each task is processed, and the are combined across tasks into a single update. Compared to alternating batches between tasks BID32, processing all tasks simultaneously simplified the training procedure, and led to faster and lower final convergence. When encoders are shared, the inputs of the samples in each batch are the same across tasks. Cross-entropy loss was used for all classification tasks. The overall validation loss is the sum over all per task validation losses. In each experiment, single task, parallel ordering (Eq. 2), permuted ordering (Eq. 3), and soft ordering (Eq. 7) trained an equivalent set of core layers. In permuted ordering, the order of layers was randomly generated for each task each trial. Several trials were run for each setup to produce confidence bounds. Input pixel values were normalized to be between 0 and 1. The training and test sets for each task were the MNIST train and test sets restricted to the two selected digits. A dropout rate of 0.5 was applied at the output of each core layer. Each setup was trained for 20K iterations, with each batch consisting of 64 samples for each task. When randomly selecting the pairs of digits that define a set of tasks, digits were selected without replacement within a task, and with replacement across tasks, so there were 45 possible tasks, and 45 k possible sets of tasks of size k. For all tasks, each input feature was scaled to be between 0 and 1. For each task, training and validation data were created by a random 80-20 split. This split was fixed across trials. A dropout rate of 0.8 was applied at the output of each core layer. To enable soft ordering, the output of all shared layers must have the same shape. For comparability, the models were made as close as possible in architecture to previous work BID49, in which models had four sharable layers, three of which were 2D convolutions followed by 2 × 2 max-pooling, of which two had 3 × 3 kernels. So, in this experiment, to evaluate soft ordering of convolutional layers, there were four core layers, each a 2D convolutional layer with ReLU activation and kernel size 3 × 3. Each convolutional layer was followed by a 2 × 2 maxpooling layer. The number of filters for each convolutional layer was set at 53, which makes the number of total model parameters as close as possible to the reference model. A dropout rate of 0.5 was applied at the output of after each core layer. The Omniglot dataset consists of 105 × 105 black-and-white images. There are fifty alphabets of characters and twenty images per character. 
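Stepping back to the shared training procedure described at the start of this appendix (one random batch per task per iteration, with the results combined into a single update), here is a rough sketch of such a joint step; it assumes the per-task Keras models share layer objects as in the earlier sketches, and the optimizer and loss are placeholders that vary by experiment:

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()                      # Adam with default parameters, as reported
loss_fn = tf.keras.losses.BinaryCrossentropy()               # placeholder; softmax tasks would use categorical CE

def joint_step(task_models, task_batches):
    """One multitask iteration: a batch for every task, losses summed, one combined update."""
    with tf.GradientTape() as tape:
        total_loss = 0.0
        for model, (xb, yb) in zip(task_models, task_batches):
            total_loss += loss_fn(yb, model(xb, training=True))
    # Union of trainable variables; shared layers appear only once.
    variables = list({v.ref(): v for m in task_models for v in m.trainable_variables}.values())
    grads = tape.gradient(total_loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return total_loss
```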
To be compatible with the shapes of shared layers, the input was zero-padded along the third dimension so that its shape was 105 × 105 × 53, i.e., with the first 105 × 105 slice containing the image data and the remainder zeros. To evaluate approaches on k tasks, a random ordering of the fifty tasks was created and fixed across all trials. In each trial, the first k tasks in this ordering were trained jointly for 5000 iterations, with each training batch containing k random samples, one from each task. The fixed ordering of tasks was as follows:[Gujarati, Sylheti, Arcadian, Tibetan, Old Church Slavonic (Cyrillic), Angelic, Malay (Jawi-Arabic), Sanskrit, Cyrillic, Anglo-Saxon Futhorc, Syriac (Estrangelo), Ge'ez, Japanese (katakana), Keble, Manipuri, Alphabet of the Magi, Gurmukhi, Korean, Early Aramaic, Atemayar Qelisayer, Tagalog The training, validation, and test splits provided by BID30 were used. There are ≈160K images for training, ≈20K for validation, and ≈20K for testing. The dataset contains 20 images of each of approximately ≈10K celebrities. The images for a given celebrity occur in only one of the three dataset splits, so models must also generalize to new human identities. The weights for ResNet-50 were initialized with the pre-trained imagenet weights provided in the Keras framework BID5. Image preprocessing was done with the default Keras image preprocessing function, including resizing all images to 224 × 224.The output for the facial landmark detection task is a 10 dimensional vector indicating the (x, y) locations of five landmarks, normalized between 0 and 1. Mean squared error was used as the training loss. When landmark detection is included, the target metric is still attribute classification error. This is because the aligned CelebA images are used, so accurate landmark detection is not a challenge, but including it as an additional task can still provide additional regularization to a multitask model. A dropout rate of 0.5 was applied at the output of after each core layer. The experiments used a batch size of 32. After validation loss converges via Adam, models are trained with RMSProp with learning rate 1e −5, which is a similar approach to that used by BID10. To produce the ing image for a fixed model, the predictions at each pixel locations were generated, denormalized, and mapped back to the pixel coordinate space. The loss used for this experiment was mean squared error (MSE). Since all pixels for a task image are used for training, there is no sense of generalization to unseen data within a task. As a , no dropout was used in this experiment. Task models are distinguished completely by their learned soft ordering scaling parameters s t, so the joint model can be viewed as a generative model which generates different 4's for varying values of s t. To visualize the behavior of layer l at depth d for task t, the output of the model for task t was visualized while sweeping s (t,l,d) across. To enable this sweeping while keeping the rest of the model behavior fixed, the softmax for each task at each depth was replaced with a sigmoid activation. Note that due to the global avgerage pooling decoder, altering the weight of a single layer has no observable effect at depth four. | Relaxing the constraint of shared hierarchies enables more effective deep multitask learning. | 1,289 | scitldr |
We propose a generic framework to calibrate accuracy and confidence (score) of a prediction through stochastic inferences in deep neural networks. We first analyze relation between variation of multiple model parameters for a single example inference and variance of the corresponding prediction scores by Bayesian modeling of stochastic regularization. Our empirical observation shows that accuracy and score of a prediction are highly correlated with variance of multiple stochastic inferences given by stochastic depth or dropout. Motivated by these facts, we design a novel variance-weighted confidence-integrated loss function that is composed of two cross-entropy loss terms with respect to ground-truth and uniform distribution, which are balanced by variance of stochastic prediction scores. The proposed loss function enables us to learn deep neural networks that predict confidence calibrated scores using a single inference. Our algorithm presents outstanding confidence calibration performance and improves classification accuracy with two popular stochastic regularization techniques---stochastic depth and dropout---in multiple models and datasets; it alleviates overconfidence issue in deep neural networks significantly by training networks to achieve prediction accuracy proportional to confidence of prediction. Deep neural networks have achieved remarkable performance in various tasks, but have critical limitations in reliability of their predictions. One example is that inference are often overly confident even for unseen or tricky examples; the maximum scores of individual predictions are very high even for out-of-distribution examples and consequently distort interpretation about the predictions. Since many practical applications including autonomous driving, medical diagnosis, and machine inspection require accurate uncertainty estimation as well as high prediction accuracy for each inference, such an overconfidence issue makes deep neural networks inappropriate to be deployed for real-world problems in spite of their impressive accuracy. Regularization is a common technique in training deep neural networks to avoid overfitting problems and improve generalization accuracy BID18; BID6; BID7. However, their objectives are not directly related to generating score distributions aligned with uncertainty of individual predictions. In other words, existing deep neural networks are inherently poor at calibrating prediction accuracy and confidence. Our goal is to learn deep neural networks that are able to estimate accuracy and uncertainty of each prediction at the same time. Hence, we propose a generic framework to calibrate prediction score (confidence) with accuracy in deep neural networks. Our algorithm starts with an observation that variance of prediction scores measured from multiple stochastic inferences is highly correlated with accuracy and confidence of the prediction based on the average score, where we employ stochastic regularization techniques such as stochastic depth or dropout to obtain multiple stochastic inference . We also interpret stochastic regularization as a Bayesian model, which shows relation between stochastic modeling and stochastic inferences of deep neural networks. By exploiting these properties, we design a loss function to enable deep neural network to predict confidence-calibrated scores based only on a single prediction, without stochastic inferences. 
Our contribution is summarized below:• We provide a generic framework to estimate uncertainty of a prediction based on stochastic inferences in deep neural networks, which is motivated by empirical observation and theoretical analysis.• We design a variance-weighted confidence-integrated loss function in a principled way without hyper-parameters, which enables deep neural networks to produce confidencecalibrated predictions even without stochastic inferences.• The proposed framework presents outstanding performance to reduce overconfidence issue and estimate accurate uncertainty in various architectures and datasets. The rest of the paper is organized as follows. We first discuss prior research related to our algorithm, and describe theoretical for Bayesian interpretation of our approach in Section 2 and 3, respectively. Section 4 presents our confidence calibration algorithm through stochastic inferences, and Section 5 illustrates experimental . Uncertainty estimation is a critical problem in deep neural networks and receives growing attention from machine learning community. Bayesian approach is a common tool to provide a mathematical framework for uncertainty estimation in deep neural networks. However, exact Bayesian inference is not tractable in deep neural networks due to its high computational cost, and various approximate inference techniques-, Laplace approximation and variational inference BID0 BID3 and deep ensembles BID8. All the post-processing methods require a hold-out validation set to adjust prediction scores after training, and the ensemble-based technique employs multiple models to estimate uncertainty. Stochastic regularization is a common technique to improve generalization performance by injecting random noise to deep neural networks. The most notable method is BID18, which randomly drops their hidden units by multiplying Bernoulli random noise. There exist several variants, for example, dropping weights BID21 or skipping layers BID6. Most stochastic regularization methods exploit stochastic inferences during training, but perform deterministic inferences using the whole network during testing. On the contrary, we also use stochastic inferences to obtain diverse and reliable outputs during testing. Although the following works do not address uncertainty estimation, their main idea is relevant to our objective. Label smoothing BID19 encourages models to be less confident, by preventing a network from assigning the full probability to a single class. The same loss function is discussed to train confidence-calibrated classifiers in BID9, but it focuses on how to discriminate in-distribution and out-of-distribution examples, rather than estimating uncertainty or alleviating miscalibration of in-distribution examples. On the other hand, BID15 claims that blind label smoothing and penalizing entropy enhances accuracy by integrating loss functions with the same concept with BID19; BID9, but improvement is marginal in practice. This section describes Bayesian interpretation of stochastic regularization in deep neural networks, and discusses relation between stochastic regularization and uncertainty modeling. Deep neural networks are prone to overfit due to their large number of parameters, and various regularization techniques including weight decay, dropout BID18, and batch normalization BID7 have been employed to alleviate the issue. 
One popular class of regularization techniques is stochastic regularization, which introduces random noise to a network for perturbing its inputs or weights. We focus on multiplicative binary noise injection, where random binary noise is applied to the inputs or weights by elementwise multiplication, since such stochastic regularization techniques are widely used BID18; BID21; BID6. Note that input perturbation can be reformulated as weight perturbation. For example, dropout (binary noise injection to activations) is interpretable as a weight perturbation that masks out all the weights associated with the dropped inputs. Therefore, if a classification network modeling p(y|x, θ) with parameters θ is trained with stochastic regularization methods by minimizing the cross entropy loss, its objective can be defined by L(θ) = −Σ_{i=1}^{N} log p(y_i | x_i, ω̂_i) (Eq. 1), where ω̂_i = θ ⊙ ε_i is a set of perturbed parameters obtained by elementwise multiplication with a random noise sample ε_i ∼ p(ε), and (x_i, y_i) ∈ D is a pair of input and output in training dataset D. Note that ω̂_i is a random sample from p(ω) given by the product of the deterministic parameter θ and a random noise ε_i. At inference time, the network is parameterized by the expectation of the perturbed parameters, θ̂ = E[ω̂] = θ ⊙ E[ε]. Given the dataset D with N examples, the Bayesian objective is to estimate the posterior distribution of the model parameter, denoted by p(ω|D), to predict a label y for an input x, which is given by p(y|x, D) = ∫ p(y|x, ω) p(ω|D) dω. A common technique for the posterior estimation is variational approximation, which introduces an approximate distribution q_θ(ω) and minimizes the Kullback-Leibler (KL) divergence with the true posterior D_KL(q_θ(ω)||p(ω|D)) as follows: L_VA(θ) = −Σ_{i=1}^{N} ∫ q_θ(ω) log p(y_i|x_i, ω) dω + D_KL(q_θ(ω)||p(ω)) (Eq. 4). The intractable integral and summation over the entire dataset in Equation 4 is approximated by the Monte Carlo method and mini-batch optimization, resulting in L̂_VA(θ) ≈ −(N/M) Σ_{i=1}^{M} (1/S) Σ_{s=1}^{S} log p(y_i|x_i, ω̂_s) + D_KL(q_θ(ω)||p(ω)) (Eq. 5), where ω̂_s ∼ q_θ(ω) is a sample from the approximate distribution, S is the number of samples, and M is the size of a mini-batch. Note that the first term is the data likelihood and the second term is the divergence of the approximate distribution with respect to the prior distribution. Suppose that we train a classifier with ℓ2 regularization by a stochastic gradient descent method. Then, the loss function in Equation 1 is rewritten as L̂(θ) = −(1/M) Σ_{i=1}^{M} log p(y_i | x_i, ω̂_i) + λ‖θ‖², where the ℓ2 regularization is applied to the deterministic parameters θ with weight λ. Optimizing this loss function is equivalent to optimizing Equation 5 if there exists a proper prior p(ω) and q_θ(ω) is approximated as a Gaussian mixture distribution BID1. Following BID1 and BID20, we estimate the predictive mean and uncertainty by Monte Carlo approximation, drawing binary noise samples {ε_t}_{t=1}^{T} as E_q[y] ≈ (1/T) Σ_{t=1}^{T} p(y | x, θ ⊙ ε_t) and Cov_q[y] ≈ (1/T) Σ_{t=1}^{T} p(y | x, θ ⊙ ε_t) p(y | x, θ ⊙ ε_t)^⊤ − E_q[y] E_q[y]^⊤ (Eq. 8), where y = (y_1, ..., y_C) denotes a vector of C class labels. Note that the binary noise samples realize stochastic inferences such as stochastic depth and dropout by elementwise multiplication with the model parameter θ. Equation 8 means that the average prediction and its variance can be computed directly from multiple stochastic inferences.
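A small sketch of the Monte Carlo estimates in Equation 8 (ours; it assumes a Keras-style classifier whose dropout, or a stochastic-depth layer implemented the same way, stays active when called with training=True; T and the variance summary are illustrative choices):

```python
import tensorflow as tf

def mc_predict(model, x, T=5):
    """Estimate the predictive mean and (diagonal) variance from T stochastic forward passes (Eq. 8)."""
    # training=True keeps the multiplicative binary noise active, so each pass draws a new noise sample.
    probs = tf.stack([model(x, training=True) for _ in range(T)], axis=0)   # [T, batch, C]
    mean = tf.reduce_mean(probs, axis=0)                                    # E_q[y]
    var = tf.reduce_mean(probs ** 2, axis=0) - mean ** 2                    # diagonal of Cov_q[y]
    return mean, var
```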
Then, prediction accuracy and uncertainty are directly accessible from the predicted scores obtained from a single forward pass. This section presents our observations from stochastic inferences and technical details about our confidence calibration technique. Equation 8 suggests that variation of models is correlated with variance of multiple stochastic predictions for a single example. In other words, by observing variation of multiple stochastic inferences, we can estimate accuracy and uncertainty of the prediction given by the average of the stochastic inferences corresponding to an example. FIG0 presents how variance of multiple stochastic inferences given by stochastic depth or dropout is related to accuracy and confidence of the corresponding average prediction, where the confidence is measured by the maximum score of the average prediction. In the figure, accuracy and score of each bin are computed with the examples belonging to the corresponding bin of the normalized variance. We present results from CIFAR-100 with ResNet-34 and VGGNet with 16 layers. The histograms illustrate the strong correlation between the predicted variance and the reliability (accuracy and confidence) of a prediction, and between accuracy and prediction. These results suggest that one can disregard examples based on their prediction variances. Note that variance computation with more stochastic inferences provides more reliable estimation of accuracy and confidence. We first design a simple loss function for accuracy-score calibration by augmenting a confidence-integrated loss L_U to the standard cross-entropy loss term, which is given by L_1(θ) = Σ_i H(p_GT(y|x_i), p(y|x_i, θ)) + β H(U(y), p(y|x_i, θ)) + ξ (Eq. 9), where H is the cross entropy loss function, p_GT is the ground-truth distribution, p(y|x_i, θ) is the predicted distribution with model parameter θ, U(y) is the uniform distribution, and ξ is a constant. The loss denoted by L_1(·) is determined based on cross-entropy with the ground-truths and KL-divergence from the uniform distribution. The main idea of this loss function is to regularize with the uniform distribution by expecting the score distributions of uncertain examples to be flattened first while the distributions of confident ones remain intact, where the impact of the confidence-integrated loss term is controlled by a global hyper-parameter β. The proposed loss function is also employed in BID15 to regularize deep neural networks and improve classification accuracy. However, BID15 does not discuss confidence calibration issues. On the other hand, BID9 discusses the same loss function but focuses on differentiating between in-distribution and out-of-distribution examples by measuring the loss of each example based only on one of the two loss terms depending on its origin. Contrary to these approaches, we employ the loss function in Equation 9 for estimating prediction confidence in deep neural networks. Although the proposed loss makes sense intuitively, blind selection of a constant β limits its generality. Hence, we propose a more sophisticated confidence loss term by leveraging variance of multiple stochastic inferences. The strong correlation of accuracy and confidence with predicted variance observed in FIG0 shows great potential to make confidence-calibrated predictions by stochastic inferences. However, variance computation involves multiple stochastic inferences by executing multiple forward passes. Note that this property incurs additional computational cost and may produce inconsistent results.
To overcome these limitations, we propose a generic framework for training accuracy-score calibration networks whose prediction score from a single forward pass directly provides confidence of the prediction. In this framework, we combine two complementary loss terms as in Equation 9, but they are balanced by the variance measured by multiple stochastic inferences. Our variance-weighted confidence-integrated loss L(·) for the whole training data (x_i, y_i) ∈ D is defined by a linear interpolation of the standard cross-entropy loss with ground-truth L_GT(·) and the cross-entropy with the uniform distribution L_U(·), which is formally given by L(θ) = Σ_i (1/T) Σ_{j=1}^{T} [(1 − α_i) H(p_GT(y|x_i), p(y|x_i, ω̂_{i,j})) + α_i H(U(y), p(y|x_i, ω̂_{i,j})) + ξ_i] (Eq. 10), where α_i ∈ [0, 1] is a normalized variance, ω̂_{i,j} (= θ ⊙ ε_{i,j}) is a sampled model parameter with binary noise for stochastic prediction, T is the number of stochastic inferences, and ξ_i is a constant. The two terms in our variance-weighted confidence-integrated loss push the network toward opposite directions; the first term encourages the network to produce a high score for the ground truth label while the second term forces the network to predict the uniform distribution. These terms are linearly interpolated by a balancing coefficient α_i, which is the normalized variance of the individual example obtained by multiple stochastic inferences. Note that the normalized variance α_i is unique for each training example and is used to measure model uncertainty. Therefore, optimizing our loss function produces gradient signals forcing the prediction toward the uniform distribution for the examples with high uncertainty derived from high variance, while intensifying prediction confidence of the examples with low variance. After training models in our framework, the prediction for each testing example is made by a single forward pass. Unlike ordinary models, however, a prediction score of our model is well-calibrated and represents confidence of the prediction, which means that we can rely more on the predictions with high scores. There are several score calibration techniques that adjust confidence scores through post-processing BID3; BID23; Naeini et al.; BID14, among which BID3 proposes a method to calibrate confidence of predictions by scaling logits of a network using a global temperature τ. The scaling is performed before applying the softmax function, and τ is trained with a validation dataset. As discussed in BID3, this simple technique is equivalent to maximizing the entropy of the output distribution p(y_i|x_i). It is also identical to minimizing the KL-divergence D_KL(p(y_i|x_i)||U(y)) because D_KL(p(y_i|x_i)||U(y)) = −H(p(y_i|x_i)) + ξ_c (Eq. 11), where ξ_c is a constant. We can formulate another confidence-integrated loss with the entropy as L_2(θ) = Σ_i [H(p_GT(y|x_i), p(y|x_i, θ)) − γ H(p(y|x_i, θ))] (Eq. 12), where γ is a constant. Equation 12 suggests that temperature scaling in BID3 is closely related to our framework. We choose the two most widely adopted deep neural network architectures: ResNet and VGGNet. The VGG architecture follows BID17, where we employ dropout BID18 before every fc layer except for the classification layer. In ResNet, instead of stacking conv layers directly, outputs of residual blocks are added to the input feature representation by residual connections as proposed in BID4. Stochastic depth BID6 is used for stochastic regularization in ResNet. Note that, as discussed in Section 3.3, since both dropout and stochastic depth inject multiplicative binary noise to within-layer activations or residual blocks, they are equivalent to noise injection into network weights. Hence, training with an ℓ2 regularization term enables us to interpret stochastic depth and dropout by Bayesian models.
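Before the experimental setup, here is a hedged sketch of the variance-weighted confidence-integrated loss of Eq. 10 (ours, in TensorFlow/Keras): the per-example weight alpha comes from the variance of T stochastic predictions, and the simple max-normalization used here is only a placeholder for the Bhattacharyya-coefficient-based normalization described below.

```python
import tensorflow as tf

def vwci_loss(model, x, y_onehot, T=5):
    """Schematic variance-weighted confidence-integrated loss (Eq. 10)."""
    probs = tf.stack([model(x, training=True) for _ in range(T)], axis=0)   # [T, batch, C] stochastic predictions
    var = tf.reduce_mean(tf.math.reduce_variance(probs, axis=0), axis=-1)   # per-example predictive variance
    alpha = tf.stop_gradient(var / (tf.reduce_max(var) + 1e-8))             # placeholder normalization to [0, 1];
                                                                             # treated as a fixed coefficient (our choice)
    uniform = tf.ones_like(y_onehot) / tf.cast(tf.shape(y_onehot)[-1], y_onehot.dtype)
    shape = tf.shape(probs)
    ce_gt = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(
        tf.broadcast_to(y_onehot, shape), probs), axis=0)                   # (1/T) sum_j H(p_GT, p_j)
    ce_uni = tf.reduce_mean(tf.keras.losses.categorical_crossentropy(
        tf.broadcast_to(uniform, shape), probs), axis=0)                    # (1/T) sum_j H(U, p_j)
    return tf.reduce_mean((1.0 - alpha) * ce_gt + alpha * ce_uni)           # interpolated per example by alpha
```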
Hence, training with an ℓ2 regularization term enables us to interpret stochastic depth and dropout as Bayesian models. We evaluate the proposed framework on two benchmarks, Tiny ImageNet and CIFAR-100. Tiny ImageNet contains 64 × 64 images with 200 object labels whereas CIFAR-100 has 32 × 32 images of 100 objects. There are 500 training images per class in both datasets. For testing, we use the validation set of Tiny ImageNet and the test set of CIFAR-100, which contain 50 and 100 images per class, respectively. To use the same network for the two benchmarks, we resize images in Tiny ImageNet to 32 × 32. All networks are trained with stochastic gradient descent with momentum of 0.9 for 300 epochs. We set the initial learning rate to 0.1 and decay it by a factor of 0.2 at epochs 60, 120, 160, 200 and 250. Each batch consists of 64 and 256 training examples for the ResNet and VGG architectures, respectively. To train networks with the proposed variance-weighted confidence-integrated loss, we draw T samples for each input image by default, and compute the normalized variance α by running T forward passes. The number of samples T is set to 5. The normalized variance is estimated based on the variance of Bhattacharyya coefficients between individual predictions and the average prediction. The models trained with the variance-weighted confidence-integrated (VWCI) loss are compared to the models trained with confidence-integrated (CI) losses for several different constant β's. We measure classification accuracy and expected calibration error (ECE) of the trained models. While classification accuracy shows the regularization effect of the confidence-integrated loss term, ECE summarizes miscalibration of a model by measuring the discrepancy between confidence and accuracy. Specifically, let B m be the set of indices of test examples whose scores for the ground-truth labels fall into the score interval DISPLAYFORM0, where M is the number of bins. Then, ECE is formally defined by DISPLAYFORM1 where N is the number of test samples. Also, accuracy and confidence of each bin are given by DISPLAYFORM2, where ŷ i and y i are the predicted and true labels of the i-th example and p i is its predicted confidence. TAB1 presents results of ResNet-34 and VGG-16 on both datasets. We observe that baseline methods with stochastic inferences reduce calibration error, and the reduction becomes more significant in proportion to the number of inferences. These results imply the benefit of stochastic inference for confidence calibration, and reflect the performance of the methods based on multiplicative noise in BID1; BID11. The models trained with the VWCI loss consistently outperform the baselines and are competitive with the models trained with the CI loss on both classification accuracy and confidence calibration. Stochastic inference and the variance-driven weight allow us to measure uncertainty for each instance, and enable the two opposing terms to be balanced by the measured uncertainty. The confidence loss term regularizes the network by forcing predictions toward the uniform distribution, and proper estimation of its coefficient leads to accuracy gains and confidence calibration. Note that, by assigning a low coefficient to the loss term for a confident example, the network allows the example to remain confident, whereas a high weight for an uncertain example reduces the confidence of its prediction. The CI loss has a confidence loss term with a fixed coefficient β.
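The ECE metric used above is straightforward to compute; the sketch below follows the standard binning definition. The number of bins is left as a free parameter (the default of 15 is only an illustrative choice, not taken from the paper).

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    # Bin test examples by confidence, then accumulate the gap between
    # per-bin accuracy and per-bin mean confidence, weighted by bin size.
    confidences = np.asarray(confidences, dtype=float)
    correct = (np.asarray(predictions) == np.asarray(labels)).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()
            conf = confidences[mask].mean()
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```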
The networks trained with a proper β show impressive improvement on both criteria, but their performance is sensitive to the choice of β since this strategy ignores predictive uncertainty in the confidence loss; an inappropriate choice of β even worsens accuracy and calibration error, e.g., ResNet-34 trained with CI[β = 0.01] on Tiny ImageNet. Also, there seems to be no single β that is globally optimal across architectures and benchmark datasets. For instance, training the network with CI[β = 0.01] on Tiny ImageNet gives the worst accuracy with ResNet-34 and the best accuracy with VGG-16. In the experiments, the CI loss often works well on CIFAR-100 due to high accuracy. The majority of examples are classified correctly, and the overconfident property of deep neural networks does little harm to confidence calibration. Specifically, the CI loss sometimes achieves slightly better performance than VWCI with a certain fixed coefficient β because the normalized variance measured by stochastic inferences, and its range, are small. On the other hand, on the Tiny ImageNet dataset, the performance of VWCI is consistently better than CI because Tiny ImageNet is substantially more challenging than CIFAR-100. A critical benefit of our variance-driven weight in the VWCI loss is the capability to maintain examples with high accuracy and high confidence. This is an important property for building real-world decision-making systems with confidence intervals, where the decisions should be both highly accurate and confident. FIG1 illustrates coverage of test examples as the confidence threshold varies, and VWCI shows better coverage than CI because CI pushes all instances toward the uniform distribution with the same strength β regardless of their uncertainty, unlike VWCI. It is also notable that the β for the best coverage is different from that for the best accuracy and ECE, whereas VWCI balances these based on the predictive uncertainty. These results suggest that using the predictive uncertainty for balancing the terms is preferable to setting a constant coefficient in our loss function. More experimental results are presented in the supplementary document. We presented a generic framework for uncertainty estimation of a prediction in deep neural networks by calibrating accuracy and score based on stochastic inferences. Based on the Bayesian interpretation of stochastic regularization and our empirical observations, we claim that the variation of multiple stochastic inferences for a single example is a crucial factor in estimating the uncertainty of the average prediction. Motivated by this fact, we design the variance-weighted confidence-integrated loss to learn confidence-calibrated networks and enable uncertainty to be estimated by a single prediction. The proposed algorithm is also useful for understanding existing confidence calibration methods in a unified way, and we compared our algorithm with other variations within our framework to analyze their characteristics. | We propose a framework to learn confidence-calibrated networks by designing a novel loss function that incorporates predictive uncertainty estimated through stochastic inferences. | 1,290 | scitldr |
Real-life control tasks involve matters of various substances---rigid or soft bodies, liquid, gas---each with distinct physical behaviors. This poses challenges to traditional rigid-body physics engines. Particle-based simulators have been developed to model the dynamics of these complex scenes; however, relying on approximation techniques, their simulation often deviates from real-world physics, especially in the long term. In this paper, we propose to learn a particle-based simulator for complex control tasks. Combining learning with particle-based systems brings in two major benefits: first, the learned simulator, just like other particle-based systems, acts widely on objects of different materials; second, the particle-based representation poses strong inductive bias for learning: particles of the same type have the same dynamics within. This enables the model to quickly adapt to new environments of unknown dynamics within a few observations. We demonstrate robots achieving complex manipulation tasks using the learned simulator, such as manipulating fluids and deformable foam, with experiments both in simulation and in the real world. Our study helps lay the foundation for robot learning of dynamic scenes with particle-based representations. Objects have distinct dynamics. Under the same push, a rigid box will slide, modeling clay will deform, and a cup full of water will fall with water spilling out. The diverse behavior of different objects poses challenges to traditional rigid-body simulators used in robotics BID31 BID30. Particle-based simulators aim to model the dynamics of these complex scenes BID18; however, relying on approximation techniques for the sake of perceptual realism, their simulation often deviates from real world physics, especially in the long term. Developing generalizable and accurate forward dynamics models is of critical importance for robot manipulation of distinct real-life objects. We propose to learn a differentiable, particle-based simulator for complex control tasks, drawing inspiration from recent development in differentiable physical engines BID0 BID3. In robotics, the use of differentiable simulators, together with continuous and symbolic optimization algorithms, has enabled planning for increasingly complex whole body motions with multi-contact and multi-object interactions BID32 ). Yet these approaches have only tackled local interactions of rigid bodies. We develop dynamic particle interaction networks (DPINets) for learning particle dynamics, focusing on capturing the dynamic, hierarchical, and long-range interactions of particles FIG0 -c). DPI-Nets can then be combined with classic perception and gradient-based control algorithms for robot manipulation of deformable objects FIG0 ).Learning a particle-based simulator brings in two major benefits. First, the learned simulator, just like other particle-based systems, acts widely on objects of different states. DPI-Nets have successfully captured the complex behaviors of deformable objects, fluids, and rigid-bodies. With learned DPINets, our robots have achieved success in manipulation tasks that involve deformable objects of complex physical properties, such as molding plasticine to a target shape. Our project page: http://dpi.csail.mit.edu Perception and control with the learned model. Our system first reconstructs the particle-based shape from visual observation. It then uses gradient-based trajectory optimization to search for the actions that produce the most desired output. 
Second, the particle-based representation poses strong inductive bias for learning: particles of the same type have the same dynamics within. This enables the model to quickly adapt to new environments of unknown dynamics within a few observations. Experiments suggest that DPI-Nets quickly learn to adapt to characterize a novel object of unknown physical parameters by doing online system identification. The adapted model also helps the robot to successfully manipulate object in the real world. DPI-Nets combine three key features for effective particle-based simulation and control: multi-step spatial propagation, hierarchical particle structure, and dynamic interaction graphs. In particular, it employs dynamic interaction graphs, built on the fly throughout manipulation, to capture the meaningful interactions among particles of deformable objects and fluids. The use of dynamic graphs allows neural models to focus on learning meaningful interactions among particles, and is crucial for obtaining good simulation accuracy and high success rates in manipulation. As objects deform when robots interact with them, a fixed interaction graph over particles is insufficient for robot manipulating non-rigid objects. Experiments demonstrate that DPI-Nets significantly outperform interaction networks BID0, HRN BID19, and a few other baselines. More importantly, unlike previous paper that focused purely on forward simulation, we have applied our model to downstream control tasks. Our DPI-Nets enable complex manipulation tasks for deformable objects and fluids, and adapts to scenarios with unknown physical parameters that need to be identified online. We have also performed real-world experiments to demonstrate our model's generalization ability. Differentiable physics simulators. Researchers have developed many differentiable physical simulators BID7 BID6 BID31, among which some can also provide analytical gradients BID30. In particular, BID0 and BID3 have explored learning a simulator from data by approximating object interactions with neural networks. BID16 proposed learning to propagate signals along the interaction graph and extended to partially observable scenarios. These methods mostly focus on modeling rigid body dynamics. Differentiable simulators for deformable objects have been less studied. Recently, BID25 proposed SPNets for differentiable simulation of position-based fluids BID17 ). An inspiring concurrent work from BID19 explored learning to approximate particle dynamics of deformable shapes with the Hierarchical Relation Network (HRN). Compared with these papers, we introduce state-specific modeling and dynamic graphs for accurate forward prediction for different states of matter (rigid bodies, deformable shapes, fluids). We also demonstrate how the learned dynamics model can be used for control in both simulation and real world. Our approach is also complementary to some recent work on learning to discover the interaction graphs BID33 BID13. Our model can also be naturally augmented with a perception module to handle raw visual input, as suggested by BID34; BID35 BID9.Model-predictive control with a differentiable simulator. Many recent papers have studied model-predictive control with deep networks BID15 BID10 BID20 BID8 BID28. They often learn an abstract state transition function, instead of an explicit account of the environment BID21, and then use the learned function to facilitate training of a policy network. 
A few recent papers have employed analytical, differentiable simulators (de Avila BID5 BID25 for control problems, such as tool manipulation and tool-use planning BID32 . Our model builds on and extends these approaches by learning a general physics simulator that takes raw object observations (e.g., positions, velocities) of each particle as input. We then integrate it into classic trajectory optimization algorithms for control. Compared with pure analytical simulators, our learned simulator can better generalize to novel testing scenarios where object and environment parameters are unknown. A few papers have explored using interaction networks for planning and control. They often learn a policy based on interaction networks' rollouts BID11. In contrast, our model learns a dynamics simulator and directly optimizes trajectories for continuous control. Recently, BID24 have applied interaction networks for control, and BID16 have further extended interaction nets to handle instance signal propagation for controlling multiple rigid bodies under partial observations. Compared with them, our dynamic particle interaction network simulate and control deformable, particle-based objects, using dynamic graphs to tackle scenes with complex object interactions. We first describe how interaction networks BID0 represent the physical system; we then extend them for particle-based dynamics. The interactions within a physical system are represented as a directed graph, G = O, R, where vertices O = {o i} represent objects and edges R = {r k} represent relations. Specifically, DISPLAYFORM0, where x i = q i,q i is the state of object i, containing its position q i and velocityq i. a o i denotes its attributes (e.g., mass, radius). For relation, we have DISPLAYFORM1 where u k is the receiver, v k is the sender, and both are integers. a r k is the type and attributes of relation k (e.g., collision, spring connection).The goal is to build a learnable physical engine to capture the underlying physical interactions using function approximators φ. The learned model can then be used to infer the system dynamics and predict the future from the current interaction graph as G t+1 = φ(G t), where G t denotes the scene state at time t. Interaction networks. BID0 proposed interaction networks (IN), a generalpurpose, learnable physics engine that performs object-and relation-centric reasoning about physics. INs define an object function f O and a relation function f R to model objects and their relations in a compositional way. The future state at time t + 1 is predicted as DISPLAYFORM2 denotes object i at time t, u k and v k are the receiver and sender of relation r k respectively, and N i denotes the relations where object i is the receiver. Propagation networks. A limitation of INs is that at every time step t, it only considers local information in the graph G and cannot handle instantaneous propagation of forces, which however is a common phenomenon in rigid-body dynamics. BID16 proposed propagation networks to handle the instantaneous propagation of forces by doing multi-step message passing. Specifically, they first employed the ideas on fast training of RNNs BID14 BID1 to encode the shared information beforehand and reuse them along the propagation steps. The encoders for objects are denoted as f DISPLAYFORM3 At time t, denote the propagating influence from relation k at propagation step l as e l k,t, and the propagating influence from object i as h l i,t. 
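Before the step-by-step propagation equations below, it may help to see the basic interaction-network update they build on as code. This is a minimal PyTorch sketch; the MLP sizes, tensor layout, and class name are illustrative assumptions rather than the architecture used in the paper.

```python
import torch
import torch.nn as nn

class InteractionStep(nn.Module):
    # One IN-style update: f_R computes an effect for every directed relation,
    # effects are summed per receiver, and f_O predicts each object's next state.
    def __init__(self, state_dim, rel_attr_dim, effect_dim=64, hidden=128):
        super().__init__()
        self.f_R = nn.Sequential(nn.Linear(2 * state_dim + rel_attr_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, effect_dim))
        self.f_O = nn.Sequential(nn.Linear(state_dim + effect_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, state_dim))

    def forward(self, states, receivers, senders, rel_attrs):
        # states: (N, state_dim); receivers, senders: (E,) long; rel_attrs: (E, rel_attr_dim)
        edge_input = torch.cat([states[receivers], states[senders], rel_attrs], dim=1)
        effects = self.f_R(edge_input)                                   # (E, effect_dim)
        aggregated = torch.zeros(states.size(0), effects.size(1), device=states.device)
        aggregated.index_add_(0, receivers, effects)                     # sum effects per receiver
        return self.f_O(torch.cat([states, aggregated], dim=1))          # predicted next states
```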
For step 1 ≤ l ≤ L, propagation can be described asStep 0: h DISPLAYFORM4 Step DISPLAYFORM5 DISPLAYFORM6 Output: DISPLAYFORM7 where f O denotes the object propagatorand f R denotes the relation propagator. Particle-based system is widely used in physical simulation due to its flexibility in modeling various types of objects BID18. We extend existing systems that model object-level interactions to allow particle-level deformation. Consider object set {o i}, where each object DISPLAYFORM0.. |oi| is represented as a set of particles. We now define the graph on the particles and the rules for influence propagation. Dynamic graph building. The vertices of the graph are the union of particles for all objects DISPLAYFORM1..|O|,k=1...|oi|. The edges R between these vertices are dynamically generated over time to ensure efficiency and effectiveness. The construction of the relations is specific to environment and task, which we'll elaborate in Section 4. A common choice is to consider the neighbors within a predefined distance. An alternative is to build a static, complete interaction graph, but it has two major drawbacks. First, it is not efficient. In many common physical systems, each particle is only interacting with a limited set of other particles (e.g., those within its neighborhood). Second, a static interaction graph implies a universal, continuous neural function approximator; however, many physical interactions involve discontinuous functions (e.g. contact). In contrast, using dynamic graphs empowers the model to tackle such discontinuity. Hierarchical modeling for long-range dependence. Propagation networks BID16 require a large L to handle long-range dependence, which is both inefficient and hard to train. Hence, we add one level of hierarchy to efficiently propagate the long-range influence among particles BID19. For each object that requires modeling of the long-range dependence (e.g. rigid-body), we cluster the particles into several non-overlapping clusters. For each cluster, we add a new particle as the cluster's root. Specifically, for each object o i that requires hierarchical modeling, the corresponding roots are denoted asõ i = {õ k i} k=1...|õi|, and the particle set containing all the roots is denoted asÕ = {õ k i} i=1...|O|,k=1...|õi|. We then construct an edge set R LeafToRoot that contains directed edges from each particle to its root, and an edge set R RootToLeaf containing directed edges from each root to its leaf particles. For each object that need hierarchical modeling, we add pairwise directed edges between all its roots, and denote this edge set as R RootToRoot.We employ a multi-stage propagation paradigm: first, propagation among leaf nodes, φ LeafToLeaf (O, R); second, propagation from leaf nodes to root nodes, φ LeafToRoot (O ∪ O, R LeafToRoot); third, propagation between roots, φ RootToRoot (Õ, R RootToRoot); fourth, propagation from root to leaf, φ RootToLeaf (O ∪Õ, R RootToLeaf). The signals on the leaves are used to do the final prediction. Applying to objects of various materials. We define the interaction graph and the propagation rules on particles for different types of objects as follows:• Rigid bodies. All the particles in a rigid body are globally coupled; hence for each rigid object, we define a hierarchical model to propagate the effects. After the multi-stage propagation, we average the signals on the particles to predict a rigid transformation (rotation and translation) for the object. The motion of each particle is calculated accordingly. 
For each particle, we also include its offset to the center-of-mass to help determine the torque.• Elastic/Plastic objects. For elastically deforming particles, only using the current position and velocity as the state is not sufficient, as it is not clear where the particle will be restored after the deformation. Hence, we include the particle state with the resting position to indicate the place where the particle should be restored. When coupled with plastic deformation, the resting position might change during an interaction. Thus, we also infer the motion of the resting position as a part of the state prediction. We use hierarchical modeling for this category but predict next state for each particles individually.• Fluids. For fluid simulation, one has to enforce density and incompressibility, which can be effectively achieved by only considering a small neighborhood for each particle BID17. Therefore, we do not need hierarchical modeling for fluids. We build edges dynamically, connecting a fluid particle to its neighboring particles. The strong inductive bias leveraged in the fluid particles allows good performance even when tested on data outside training distributions. For the interaction between different materials, two directed edges are generated for any pairs of particles that are closer than a certain distance. Model-based methods offer many advantages when comparing with their model-free counterparts, such as generalization and sample efficiency. However, for cases where an accurate model is hard to specify or computationally prohibitive, a data-driven approach that learns to approximate the underlying dynamics becomes useful. Function approximators such as neural networks are naturally differentiable. We can rollout using the learned dynamics and optimize the control inputs by minimizing a loss between the simulated and a target configuration. In cases where certain physical parameters are unknown, we can perform online system identification by minimizing the difference between the model's prediction and the reality. An outline of our algorithm can be found in Section A.Model predictive control using shooting methods. Let's denote G g as the goal andû 1:T be the control inputs, where T is the time horizon. The control inputs are part of the interaction graph, such as the velocities or the initial positions of a particular set of particles. We denote the ing trajectory after applyingû as G = {G i} i=1:T. The task here is to determine the control inputs as to minimize the distance between the actual outcome and the specified goal L goal (G, G g).Our dynamic particle interaction network does forward simulation by taking the dynamics graph at time t as input, and produces the graph at next time step,Ĝ t+1 = Φ(G t), where Φ is implemented as DPI-Nets as described in the previous section. Let's denote the the history until time t as G = {G i} i=1...t, and the forward simulation from time step t asĜ = {Ĝ i} i=t+1...T. The loss L goal (Ḡ ∪Ĝ, G g) can be used to update the control inputs by doing stochastic gradient descent (SGD). This is known as the shooting method in trajectory optimization BID29 ).The learned model might deviate from the reality due to accumulated prediction errors. We use Model-Predictive Control (MPC) BID2 to stabilize the trajectory by doing forward simulation and updating the control inputs at every time step to compensate the simulation error. Online adaptation. 
In many real-world cases, without actually interacting with the environment, inherent attributes such as mass, stiffness, and viscosity are not directly observable. DPI-Nets can estimate these attributes on the fly with SGD updates by minimizing the distance between the predicted future states and the actual future states L state (Ĝ t, G t). We evaluate our method on four different environments containing different types of objects and interactions. We will first describe the environments and show simulation . We then present how the learned dynamics helps to complete control tasks in both simulation and the real world. FluidFall (Figure 2a). Two drops of fluids are falling down, colliding, and merging. We vary the initial position and viscosity for training and evaluation. BoxBath (Figure 2b). A block of fluids are flushing a rigid cube. In this environment, we have to model two different materials and the interactions between them. We randomize the initial position of the fluids and the cube to test the model's generalization ability. FluidShake (Figure 2c). We have a box of fluids and the box is moving horizontally, The speed of the box is randomly selected at each time step. We vary the size of the box and volume of the fluids to test generalization. RiceGrip (Figure 2d). We manipulate an object with both elastic and plastic deformation (e.g., sticky rice). We use two cuboids to mimic the fingers of a parallel gripper, where the gripper is initialized at a random position and orientation. During the simulation of one grip, the fingers will move closer to each other and then restore to its original positions. The model has to learn the interactions between the gripper and the "sticky rice", as well as the interactions within the "rice" itself. We use all four environments in evaluating our model's performance in simulation. We use the rollout MSE as our metric. We further use the latter two for control, because they involve fully actuated external shapes that can be used for object manipulation. In FluidShake, the control task requires determining the speed of the box at each time step, in order to make the fluid match a target configuration within a time window; in RiceGrip, the control task corresponds to select a sequence of grip configurations (position, orientation, closing distance) to manipulate the deformable object as to match a target shape. The metric for performance in control is the Chamfer distance between the manipulation and the target configuration. We present implementation details for dynamics learning in the four environment and perform ablation studies to evaluate the effectiveness of the introduced techniques. Implementation details. For FluidFall, we dynamically build the interaction graph by connecting each particle to its neighbors within a certain distance d. No hierarchical modeling is used. For BoxBath, we model the rigid cube as in Section 3.2, using multi-stage hierarchical propagation. Two directed edges will be constructed between two fluid particles if the distance between them is smaller than d. Similarly, we also add two directed edge between rigid particles and fluid particles when their distance is smaller than d. For FluidShake, we model fluid as in Section 3.2. We also add five external particles to represent the walls of the box. We add a directed edge from the wall particle to the fluid particle when they are closer than d. The model is a single propagation network, where the edges are dynamically constructed over time. 
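Two pieces of the pipeline described above lend themselves to short sketches. First, the dynamic graph construction: at every step, edges are rebuilt between particle pairs closer than the distance d. The KD-tree neighbor query is our choice for illustration; the paper does not specify a data structure.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_dynamic_edges(positions, d):
    # Connect every pair of particles closer than d with two directed edges.
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=d, output_type="ndarray")   # (P, 2) with i < j
    receivers = np.concatenate([pairs[:, 0], pairs[:, 1]])
    senders = np.concatenate([pairs[:, 1], pairs[:, 0]])
    return receivers, senders
```

Second, the shooting-method control loop with online adaptation from the previous section, assuming a differentiable one-step model Φ. The `goal_distance` function (e.g., a Chamfer distance to the target particle set), the optimizer choice, and the call signatures are placeholders, not the authors' implementation.

```python
import torch

def optimize_controls(model, graph_t, u_init, goal_distance, goal, horizon, iters=10, lr=3e-3):
    # Roll the learned simulator forward, measure distance to the goal, and
    # update the control sequence by gradient descent through the model.
    u = u_init.clone().requires_grad_(True)
    opt = torch.optim.RMSprop([u], lr=lr)
    for _ in range(iters):
        g = graph_t
        for t in range(horizon):
            g = model(g, u[t])              # differentiable forward step
        loss = goal_distance(g, goal)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return u.detach()

def adapt_parameters(model, params, g_prev, u_prev, g_obs, lr=3e-3):
    # Online system identification: nudge unknown physical parameters so the
    # one-step prediction matches the observed next state.
    opt = torch.optim.RMSprop([params], lr=lr)
    pred = model(g_prev, u_prev, params)
    loss = ((pred - g_obs) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return params
```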
For RiceGrip, we build a hierarchical model for rice and use four propagation networks for multi-stage effect propagation (Section 3.2). The edges between the "rice" particles are dynamically generated if two particles are closer than d. Similar to FluidShake, we add two external particles to represent the two "fingers" and add an edge from the "finger" to the "rice" particle if they are closer than the Figure 2: Qualitative on forward simulation. We compare the ground truth (GT) and the rollouts from HRN BID19 and our model (DPI-Net) in four environments (FluidFall, BoxBath, FluidShake, and RiceGrip). The simulations from our DPI-Net are significantly better. We provide zoom-in views for a few frames to show details. Please see our video for more empirical .distance d. As "rice" can deform both elastically and plastically, we maintain a resting position that helps the model restore a deformed particle. The output for each particle is a 6-dim vector for the velocity of the current observed position and the resting position. More training details for each environment can be found in Section D. Details for data generation are in Section C. Table 1: Quantitative on forward simulation. MSE (×10 −2) between the ground truth and model rollouts. The hyperparameters used in our model are fixed for all four environments. FluidFall and FluidShake involve no hierarchy, so DPI-Net performs the same as the variant without hierarchy. DPI-Net significantly outperforms HRN BID19 in modeling fluids (BoxBath and FluidShake) due to the use of dynamic graphs. Results. Qualitative and quantitative are in Figure 2 and Table 1. We compare our method (DPI-Net) with three baselines, Interaction Networks BID0, HRN BID19, and DPI-Net without hierarchy. Note that we use the same set of hyperparameters in our model for all four testing environments. Specifically, Interaction Networks (IN) consider a complete graph of the particle system. Thus, it can only operate on small environments such as FluidFall; it runs out of memory (12GB) for the other three environments. IN does not perform well, because its use of a complete graph makes training difficult and inefficient, and because it ignores influence propagation and long-range dependence. Without a dynamic graph, modeling fluids becomes hard, because the neighbors of a fluid particle changes constantly. Table 1 shows that for environments that involve fluids (BoxBath and FluidShake), DPI-Net performs better than those with a static interaction graph. Our model also performs better in scenarios that involve objects of multiple states (BoxBath, Figure 2b), because it uses state-specific modeling. Models such as HRN BID19 aim to learn a universal dynamics model for all states of matter. It is therefore solving a harder problem and, for this particular scenario, expected to perform not as well. When augmented with state-specific modeling, HRN's performance is likely to improve, too. Without hierarchy, it is hard to capture long-range dependence, leading to performance drop in environments that involve hierarchical object modeling (BoxBath and RiceGrip).Appendix B includes on scenarios outside the training distribution (e.g., more particles). DPI-Net performs well on these out-of-sample cases, successfully leveraging the inductive bias. Ablation studies. We also test our model's sensitivity to hyperparameters. We consider three of them: the number of roots for building hierarchy, the number of propagation steps L, and the size of the neighborhood d. 
We test them in RiceGrip. As can be seen from the shown in FIG4, DPI-Nets can better capture the motion of the "rice" by using fewer roots, on which the information might be easier to propagate. Longer propagation steps do not necessarily lead to better performance, as they increases training difficulty. Using larger neighborhood achieves better , but makes computation slower. Using one TITAN Xp, each forward step in RiceGrip takes 30ms for d = 0.04, 33ms for d = 0.08, and 40ms for d = 0.12. We also perform experiments to justify our use of different motion predictors for objects of different states. FIG4 shows the of our model vs. a unified dynamics predictor for all objects in BoxBath. As there are only a few states of interest (solids, liquids, and soft bodies), and their physical behaviors are drastically different, it is not surprising that DPI-Nets, with state-specific motion predictors, perform better, and are equally efficient as the unified model (time difference smaller than 3ms per forward step). We leverage dynamic particle interaction network for control tasks in both simulation and real world. Because trajectory optimization using shooting method can easily stuck to a local minimum, we first randomly sample N sample control sequences, and select the best performing one according to the rollouts of our learned model. We then optimize it via shooting method using our model's gradients. We also use online system identification to further improve the model's performance. FIG5 and FIG6 show qualitative and quantitative , respectively. More details of the control algorithm can be found in Section E.FluidShake. We aim to control the speed of the box to match the fluid particles to a target configuration. We compare our method (RS+TO) with random search over the learned model (without trajectory optimization -RS) and Model-free Deep Reinforcement Learning (Actor-Critic method optimized with PPO (RL). FIG6 suggests that our model-based control algorithm outperforms both baselines with a large margin. Also RL is not sample-efficient, requiring more than 10 million time steps to converge while ours requires 600K time steps. RiceGrip. We aim to select a sequence of gripping configurations (position, orientation, and closing distance) to mold the "sticky rice" to a target shape. We also consider cases where the stiffness of the rice is unknown and need to be identified. FIG6 shows that our dynamic particle interaction network with system identification performs the best, and is much more efficient than RL (150K vs. RiceGrip in the real world. We generalize the learned model and control algorithm for RiceGrip to the real world. We first reconstruct object geometry using a depth camera mounted on our Kuka robot using TSDF volumetric fusion BID4 . We then randomly sampled N fill particles within the object mesh as the initial configuration for manipulation. FIG5 and FIG6 shows that, using DPI-Nets, the robot successfully adapts to the real world environment of unknown physical parameters and manipulates a deformable foam into various target shapes. The learned policy in RiceGrip does not generalize to the real world due to domain discrepancy, and outputs invalid gripping configurations. We have demonstrated that a learned particle dynamics model can approximate the interaction of diverse objects, and can help to solve complex manipulation tasks of deformable objects. 
Our system requires standard open-source robotics and deep learning toolkits, and can be potentially deployed in household and manufacturing environment. Robot learning of dynamic scenes with particle-based representations shows profound potentials due to the generalizability and expressiveness of the representation. Our study helps lay the foundation for it. A CONTROL ALGORITHM Update A by descending with the gradients ∇ A L state (Ĝ t, G t) Forward simulation using the current graphĜ t+1 ← Φ(G t) Make a buffer for storing the simulation G ←Ḡ ∪Ĝ t+1 for i = t + 1,..., T − 1 do Forward simulation:Ĝ j+1 ← Φ(Ĝ j); G ← G ∪Ĝ j+1 end for Updateû t:T by descending with the gradients ∇û t: DISPLAYFORM0 We show our model's performance on fluids, rigid bodies, and deformable objects with a larger number of particles than they have in the training set. Figure 6 shows qualitative and quantitative . Our model scales up well to larger objects. Figure 6: Extrapolate generalization on fluids, rigid bodies, and deformable objects. The performance is evaluated by the MSE (×10 −2) between the ground truth and rollouts from DPI-Nets. The blue bars denote the range of particle numbers that have been seen during training, which indicate interpolation performance. The red bars indicate extrapolation performance that our model can generalize to cases containing two times more particles than cases it has been trained on.clusterPlasticCreep is uniformly sampled between 0.1 and 0.3. The position of the gripper is randomly sampled within a circle of radius 0.5. The orientation of the gripper is always perpendicular to the line connecting the origin to the center of the gripper and the close distance is uniformly sampled between 0.7 to 1.0.Of all the generated data, 90% of the rollouts are used for training, and the rest 10% are used for validation. The models are implemented in PyTorch, and are trained using Adam optimizer BID12 ) with a learning rate of 0.0001. The number of particles and relations might be different at each time step, hence we use a batch size of 1, and we update the weights of the networks once every 2 forward rounds. The neighborhood d is set as 0.08, and the propagation step L is set as 2 for all four environments. For hierarchical modeling, it does not make sense to propagate more than one time between leaves and roots as they are disjoint particle sets, and each propagation stage between them only involves one-way edges; hence φ LeafToLeaf uses L = 2. φ LeafToRoot uses L = 1. φ RootToRoot uses L = 2, and φ RootToLeaf uses L = 1.For all propagation networks used below, the object encoder f FluidFall. The model is trained for 13 epochs. The output of the model is the 3 dimensional velocity, which is multiplied by ∆t and added to the current position to do rollouts. BoxBath. In this environment, four propagation networks are used due to the hierarchical modeling and the number of roots for the rigid cube is set as 8. We have two separate motion predictor for fluids and rigid body, where the fluid predictor output velocity for each fluid particle, while the rigid predictor takes the mean of the signals over all its rigid particles as input, and output a rigid transformation (rotation and translation). The model is trained for 5 epochs. FluidShake. Only one propagation network is used in this environment, and the model is trained for 5 epochs. RiceGrip. Four propagation networks are used due to the hierarchical modeling, and the number of roots for the "rice" is set as 30. 
The model is trained for 20 epochs. N sample is chosen as 20 for all three cases, where we sample 20 random control sequences, and choose the best performing one as evaluated using our learned model. The evaluation is based on the Chamfer distance between the controlling and the target configuration. FluidShake. In this environment, the control sequence is the speed of the box along the x axis. The method to sample the candidate control sequence is the same as when generating training data of this environment. After selected the best performing control sequence, we first use RMSprop optimizer to optimize the control inputs for 10 iterations using a learning rate of 0.003. We then use model-predictive control to apply the control sequence to the FleX physics engine using Algorithm 1.RiceGrip. In this environment, we need to come up with a sequence of grip configurations, where each grip contains positions, orientation, and closing distance. The method to sample the candidate control sequence is the same as when generating training data of this environment. After selected the best performing control sequence, we first use RMSprop optimizer to optimize the control inputs for 20 iterations using a learning rate of 0.003. We then use model-predictive control to apply the control sequence to the FleX physics engine using Algorithm 1.RiceGrip in Real World. In this environment, we need to come up with a sequence of grip configurations, where each grip contains positions, orientation, and closing distance. The method to sample the candidate control sequence is the same as when generating training data of RiceGrip, and N fill is chosen as 768. Different from the previous case, the physical parameters are always unknown and has to be estimated online. After selected the best performing control sequence, we first use RMSprop optimizer to optimize the control inputs for 20 iterations using a learning rate of 0.003. We then use model-predictive control to apply the control sequence to the real world using Algorithm 1. | Learning particle dynamics with dynamic interaction graphs for simulating and control rigid bodies, deformable objects, and fluids. | 1,291 | scitldr |
Generative Adversarial Networks (GANs), when trained on large datasets with diverse modes, are known to produce conflated images which do not distinctly belong to any of the modes. We hypothesize that this problem occurs due to the interaction between two facts: For datasets with large variety, it is likely that the modes lie on separate manifolds. The generator (G) is formulated as a continuous function, and the input noise is derived from a connected set, due to which G's output is a connected set. If G covers all modes, then there must be some portion of G's output which connects them. This corresponds to undesirable, conflated images. We develop theoretical arguments to support these intuitions. We propose a novel method to break the second assumption via learnable discontinuities in the latent noise space. Equivalently, it can be viewed as training several generators, thus creating discontinuities in the G function. We also augment the GAN formulation with a classifier C that predicts which noise partition/generator produced the output images, encouraging diversity between each partition/generator. We experiment on MNIST, celebA, STL-10, and a difficult dataset with clearly distinct modes, and show that the noise partitions correspond to different modes of the data distribution, and produce images of superior quality. Generative Adversarial Networks BID8 are powerful generative models that have enjoyed significant attention from the research community in the past few years. Despite several successes, the original formulation for GANs is widely acknowledged to be notoriously difficult to train due to instability issues. In particular, GANs face the mode collapse problem, where the generator resorts to generating a handful of samples which are assigned high probability by the discriminator. Several methods have been introduced to fix the mode collapse problem. BID3,, BID9, BID15, BID21 Despite improvements, state-of-art GANs still fail to generate meaningful samples on diverse and complex datasets such as ImageNet BID5. GANs trained on such datasets produce conflated images which do not distinctly belong to any of the modes present in the dataset. We hypothesize that this problem occurs due to the continuous nature of the generator function, along with the connectedness of the latent noise space, due to which the output set of the generator is also connected. This poses a problem when dealing with complex real life datasets with varied modes. Strong empirical and theoretical evidence suggests that real life images lie on lowdimensional manifolds BID17. It is highly probable that distinct modes (say bedroom images and human face images) lie on disjoint manifolds. If we assume that the generator does not suffer from the mode dropping problem, it must cover all these manifolds in its output. However, the output set being connected, must contain parts which do not belong to any of the manifolds, but simply join them. We refer to such parts of the output as tunnels, since they connect otherwise disjoint manifolds. Tunnels do not resemble any of the images in the dataset, and are not similar to any of the modes. They correspond to the conflated images produced by the generator, and are undesirable. By this reasoning, we suggest that GANs with continuous generators and connected latent noise sets must suffer either from a certain degree of mode dropping or from producing conflated, garbled outputs when trained on complex and varied datasets like ImageNet. 
We develop methods that allow GANs to cover disjoint manifolds without the use of tunnels, while not compromising on mode coverage. Our approach is to create learnable discontinuities in the latent noise space. This is done by learning N different linear mappings (partitions) in the input layer of the generator. A noise vector (sampled from the standard normal distribution), gets mapped to N different vectors by the input layer, and the rest of the processing remains the same as in standard generators. The output set of each mapping is a connected set, but the union of the N output sets could potentially be disconnected. Thus, we break the connectedness assumption leading to the existence of tunnels. To facilitate learning distinct modes by each partition, we introduce a classifier that predicts which partition created a given input. We modify the loss functions to adjust for this change. We experiment on standard datasets: MNIST , celebA BID14, STL-10 (a subset of ImageNet) BID4, and a tough artificial dataset with very distinct modes -an equal mixture of LSUN BID22 bedrooms and celebA, to verify the efficacy of our method. We compare our with one of the best performing GAN variant BID9, and show an improvement in quality. The major contributions of the paper are summarized below:1. We identify a key problem with training GANs on large & diverse datasets, and provide intuition to explain its cause 2. We develop theoretical analyses to support and introduce rigor in the intuitions provided 3. Motivated by these analyses, we introduce a novel GAN setup to alleviate the problem 4. We experiment on a variety of standard datasets and report improvements over state-of-art formulations 2 RELATED WORK BID8 formulated GAN as a minimax game between two neural networks: generator G θ and discriminator D φ. G θ takes a random noise vector z as input and generates sample G θ (z), while D φ identifies whether input sample is real or generated by the generator G θ. Both G θ and D φ play a two-player minimax game with value function V (G, D): DISPLAYFORM0 where P r (x) is the real data distribution, and P(z) is arbitrary noise distribution (typically uniform or normal distribution). In practice, training GANs using above formulation is highly unstable and requires careful balance of generator and discriminator updates. BID19 proposed a class of CNNs called DCGANs (Deep Convolutional GANs) with certain architectural specifications, and demonstrated better image quality than non-convolutional vanilla GAN architecture. BID6 used Laplacian pyramid framework for the generator, where a separate generative convnet model is trained using GAN approach at each level of pyramid, to generate images in coarse-to-fine fashion. Despite better architectures, GANs suffered from problems like unstable training, vanishing gradients of generator, mode collapse. BID20 proposed several heuristics such as feature matching, minibatch discrimination, historical averaging, label smoothing, primarily to stabilize GAN training. BID3 observed that GAN training can push probability mass in wrong direction, hence are prone to missing modes of data. They proposed regularization techniques to stabilize GAN training and alleviate mode missing problem by fair distribution of probability mass across modes of the real data distribution. BID1 provided theoretical analysis of training dynamics of GANs, and problems including instability and saturation. 
They revealed fundamental problems with original GAN formulation and provided directions towards solving them. Several papers proposed alternative objective function of generator and discriminator., BID9 proposed new loss function which approximately minimizes Wasserstein distance between real and generated data distribution instead of Jensen Shannon Divergence. They claim their formulation does not require careful balance between generator and discriminator updates, thus lead to stable training without saturating the gradients. They observed no evidence of mode collapse in their experiments. BID15 used squared-loss instead of log-loss in original formulation, which provides generator with better non-vanishing gradients. BID23 view discriminator as an energy function making it possible to use additional loss functions other than logistic output binary classifier, which was found to stabilize GAN training. BID21 propose to train discriminator based on linear separability between hidden representation of real and generated samples and train generator based on decision hyperplanes between hidden representations computed using Linear Discriminant Analysis. For labelled datasets, BID16, BID18 employed label conditioning in both generator and discriminator to generate discriminable and diverse samples across classes. While this helps produce better samples for complex datasets, it requires the presence of labelled data. In this paper we propose methods to improve performance of GANs on complex datasets without making use of labels. In this section, we further develop the ideas from the introduction. We also provide theoretical analyses to lend support to these ideas. Theoretical analyses and empirical studies suggest that probability distributions of real images (denoted by P r) have supports that lie on low dimensional manifolds BID1, BID17 ).We choose to represent the support set S of distribution P r by the set of its connected components, i.e. the set of maximal connected subsets of S. These components are disjoint and their union is S. In other words, they form a partition of S. As suggested earlier, each component is a lowdimensional manifold in high-dimensional space. Throughout this paper, we use the terms manifold of S and connected component of S interchangeably, unless mentioned otherwise. Consider highly distinct images m 1, m 2 of a complex and varied dataset. These images cannot be path-connected, since that would imply the existence of a smooth sequence of images in S starting from m 1, and slowly transitioning to m 2. Such a sequence clearly does not exist for very distinct m 1, m 2, e.g. in a dataset consisting of face and bedroom images, if m 1 is a face, and m 2 is a bedroom, it is not possible for a sequence of realistic facial and bedroom images to smoothly transition from m 1 to m 2. Since path-connectedness and connectedness are equivalent properties in open subsets of R n, the manifolds on which m 1, m 2 lie, must be disconnected, and hence must be separate components of S.We summarize the above discussion with the following :Result 1. Sufficiently distinct images from the support set of real-life image probability distributions must lie on disconnected manifolds. As a corollary, we have:Result 2. The support set of a real-life image probability distribution must have at least as many connected components as the size of the maximal set of (sufficiently, pairwise) distinct modes. The generator is often formulated as a continuous function of the input latent noise vector. 
Moreover, the probability distribution from which the noise vector is drawn is usually a simple distribution like the Gaussian or the uniform, both of which have connected support sets. Finally, we observe that a continuous function, applied to a connected set, results in an output set which is connected. Thus we have: Result 3. The output set produced by a continuous generator acting on a latent noise distribution with a connected support set must be connected. From the previous discussions, we know that the output of the generator must be connected, while diverse distributions must lie on supports consisting of several disconnected manifolds. If the generator does not suffer from mode dropping, then it must cover all the distinct modes in its output set. However, this implies the existence of parts in the generator's output which connect these disconnected manifolds. We call such parts of the output space tunnels. Since tunnels do not exist in the real distribution, they must correspond to unrealistic images. We condense these ideas into the following result: Result 4. A continuous generator, with inputs drawn from a connected set, must either suffer from significant mode dropping or generate unrealistic images to some degree. In the rest of the discussion, we assume that mode dropping is not a significant problem. This is a realistic assumption due to several heuristics and formulations that alleviate this problem (BID20, BID3, BID9, BID21). We concentrate on the problem of the generation of unrealistic, distorted outputs. If we assume that the generator is K-Lipschitz (this happens with a variety of regularization terms added to the generator's loss), i.e., for any two points z 1, z 2 in the latent noise space we have DISPLAYFORM0 then the generator's output must gradually shift from one manifold to another as we travel in Z space, since the slope is bounded. The region of shift does not belong to any outputs in the real distribution. It is simple to see that some measure of the probability distribution Z is wasted on mapping these unwanted outputs. To demonstrate this, we consider the simple case where P r consists of equal measures of samples of type A and type B, both of which are disconnected and highly distinct from each other. For simplicity, we assume that the latent noise z is drawn from the 1-D region [−1, 1] with uniform probability. Let the distance between set A and set B (defined as inf a∈A,b∈B a − b) be β (> 0 due to the high distinction between the sets). Let M A, M B be the subsets of Z mapping to A and B respectively. For any arbitrary z 1 ∈ M A and z 2 ∈ M B, from the Lipschitz condition, we have: DISPLAYFORM1 Hence, the distance between any two points of M A and M B must be at least β/K, as a result of which the distance between the sets M A and M B is at least β/K. Clearly, in our hypothetical case, this results in a gap of at least β/K wherever M A ends and M B begins. Since we have assumed a uniform distribution, a probability measure of β/2K is lost to undesired outputs. It is well-known that conditional GANs (BID16, BID18) are better at learning complex datasets. The label information incorporated in the inputs of the generator plays a large role in producing better outputs. However, we also believe that the discontinuity introduced by the one-hot label representations contributes significantly to improving performance. More concretely, the input to the generator is a latent noise vector, along with the one-hot label representation.
Hence, the input space is partitioned into n (the number of classes) disconnected components due to the discrete nature of the one-hot labels. This breaks the connectedness assumption of the input space, which was a crucial part of the problem's cause. Note, however, that conditional GANs require labeled inputs, which may not be available for many datasets. Central to the results in Section 3 were the assumptions of continuity of the generator and connectedness of the support set of the latent noise distribution. Breaking either of these assumptions could lead to potential solutions. We now describe novel GAN formulations that break these assumptions while not requiring any labeled data. We create trainable discontinuities in the latent noise space by learning N different linear mappings {L 1, L 2 . . . L N} in the input layer of the generator. A noise vector z gets mapped to N vectors {y 1 = L 1 (z)... y N = L N (z)} by the input layer, and the rest of the processing remains the same as in standard generators. We end with N outputs DISPLAYFORM0 Each of the linear layers L i maps the input latent noise space Z to a connected output set O i. While each O i is a connected set, the union of the N output sets (O 1 . . . O N) could potentially be disconnected (or each mapping could collapse to the same matrix, if required by the data). This approach of partitioning the noise space can also be seen as training N different generators, with shared parameters, except for the input layer. In this view, the generator function has become potentially discontinuous due to the indexability of the outputs (i.e., choosing which output to take). In either view, we have broken one of the assumptions leading to the existence of tunnels. Finally, to facilitate the learning of distinct modes by each partition (or generator), we introduce a classifier C that predicts which partition (or generator) created a given input. We modify the GAN value function to suitably account for this change. We would like to emphasize that this formulation is generic, and can be plugged in with different types of generators, discriminators, partition mappings, and classifiers. Moreover, any improvements in GAN formulations can be independently incorporated into our setup. However, we do make specific design choices for the purpose of this paper, which we describe below. To reduce the cost of training, we take C to be a linear mapping operating on the last hidden layer of the discriminator. We believe that the discriminator extracts useful abstract features for judging the samples produced by the generator, and these can be effectively reused for partition classification. We add the partition classification loss to the existing generator and discriminator losses from the chosen base formulation. The new loss functions are: DISPLAYFORM0 L D, L G are the original loss functions for the discriminator and generator respectively, in the base GAN formulation. For vanilla GANs, DISPLAYFORM1 Here L c (y, C(x)) is the cross-entropy classification loss for input image x, which was generated by partition y, and C(x) is the classifier's softmax output vector. α is a hyperparameter introduced to control the relative importance of generating good samples w.r.t. encouraging diversity between the outputs of different partitions. Finally, we must describe the exact meaning of P g in our formulation, since there are N generators.
We sample the generator at each training step uniformly, thus making P g = 1 N N i=1 P gi, where P gi is the probability distribution induced by generator i. We now propose a simpler method for implementing a softer version of disconnectedness for the latent noise distribution. In this method, the noise vectors are drawn from a mixture-of-Gaussians with trainable means (µ 1, . . ., µ N) and covariance matrices FIG0,..., Diag(σ N)). If required by the data, the means can move sufficiently far away from each other during training, leading to an almost disconnected distribution. Each Gaussian is given equal probability weight, i.e., it is chosen uniformly at random. Our implementation makes use of the reparameterization trick BID12, BID7 ). We sample z ∼ N (0, I), along with an index i ∼ U nif orm({1, . . ., N}). We then usê z = z σ i + µ i as the latent vector for generating the sample. We let the gradients backpropagate to µ i and σ i, allowing for a learnable latent distribution. Additionally, as mentioned before, we may use a classifier to predict the Gaussian that generated a given sample, hence encouraging diversity between different Gaussians. We experiment with the image generation task on MNIST, celebA consisting of facial images, STL-10, and an artificial dataset called CelebRoom consisting of 100, 000 images randomly sampled from celebA and 100, 000 images randomly sampled from LSUN bedroom dataset. CelebRoom was constructed with the explicit goal of including diverse modes, making it difficult for GANs to train. As in BID9, we present on a toy dataset sampled from an equal mixture 8 bivariate Gaussians. The means are arranged uniformly in a circle with radius 2, and the covariance matrices are set to 0.02I. We modified a popular TensorFlow BID0 implementation of the WGAN-GP architecture 1 for all experiments. N is fixed for each dataset. We use N = 10 for MNIST, and N = 8 for celebA, STL-10, and CelebRoom.α is fixed to 1.0 for all experiments with multiple partitions. We use the ADAM optimizer BID11 for all experiments. For each dataset (except MNIST), we compare the samples obtained using the following setups:For MNIST and celebRoom, we also present generated from the mixture-of-Gaussians latent distribution method. We used the hyperparameters suggested by the original papers for each architecture. The ResNet BID10 architecture used has 4 residual blocks, as described in BID9.We avoid intensive hyperparameter tuning in order to encourage discovery of methods with relatively high hyperparameter insensitivity. We also tried training multiple partitions with the Least Squares GAN BID15 formulation, but noticed that the training quickly diverged. Hence we do not report these here. We present samples on the five datasets. For samples generated using multiple partitions, each column represents the images produced by a single partition. We observe that Vanilla GAN is unable to cover all Gaussians. As training proceeds, it shuttles between covering different subsets of Gaussians. WGAN-GP takes a long time to converge. It is finally able to cover all Gaussians, but not satisfactorily, with several samples lying between different means. We note that by reducing the gradient penalty significantly, we are able to achieve faster convergence. However, we chose to use the recommended hyperparameters for all formulations. Vanilla GAN with partitions is able to cover all the modes, and converges quickly. 
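For completeness, here is a minimal sketch of the trainable mixture-of-Gaussians latent sampling described earlier in this section (our own illustration; the number of components, dimensions, and initialization are assumptions):

```python
import torch
import torch.nn as nn

class MoGLatent(nn.Module):
    """Trainable means and diagonal std-devs; each Gaussian chosen uniformly at random."""
    def __init__(self, n_components=8, z_dim=128):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_components, z_dim) * 0.1)
        self.log_sigma = nn.Parameter(torch.zeros(n_components, z_dim))

    def sample(self, batch_size):
        i = torch.randint(0, self.mu.size(0), (batch_size,))   # component index
        eps = torch.randn(batch_size, self.mu.size(1))          # z ~ N(0, I)
        # Reparameterization: z_hat = eps * sigma_i + mu_i, so gradients
        # flow back into mu_i and sigma_i during training.
        z_hat = eps * self.log_sigma[i].exp() + self.mu[i]
        return z_hat, i   # i can also supervise the optional Gaussian classifier

latent = MoGLatent()
z_hat, idx = latent.sample(batch_size=16)
```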
We observe a slight improvement in quality in the samples generated by our model when using the partition classifier. The images produced with the classifier formulation are more distinct, sharper, and do not contain spurious appendages. We expected each partition to produce a single type of digit, but this does not seem to be the case. Despite this, we do note that the partition classifier's loss was very close to zero upon successful training. This signifies that the partitions did learn to map distinct types of outputs, but not by digit label. This can be explained by the fact that labels are not the only way to classify digits. There are several combinations of digit style, tilt, width etc. that could be captured by the partitions. We observed slight improvements in quality in the samples generated by our model over the WGAN + Gradient Penalty baseline with the DCGAN architecture. FIG0: 64 × 64 celebRoom samples generated using DCGAN architecture with mixture-ofGaussians latent distribution using: no prediction of Gaussians (left), prediction of Gaussians (right)Each partition seems to produce similar types of images, both with and without a partition classifier formulation to encourage diversity. For instance, the fifth partition (fifth column) in DCGAN architecture and WGAN-GP loss seems to capture the mode of smiling women, while the eigth partition (last column) seems to generate faces of men (often bespectacled).We also note that the ResNet architecture is unable to train successfully with partitions. We notice heavy mode collapse, and noisy outputs. Our experiments show heavy conflation of modes in the outputs generated by all GAN formulations. However, this problem is ameliorated to some extent with our partition GAN.In particular, we notice some partitions capturing distinctly well-formed modes. The DCGAN architecture with WGAN-GP loss with 8 partitions shows several such examples. The first partition (first column) seems to create images of vehicles, while the sixth partition (sixth column) seems to generate images of oceans and open skies (along with boats, ships, and airplanes). Similar clustering can also be observed with the ResNet architecture with WGAN-GP loss, where partition four (column four) generates images of birds and planes, while the sixth partition (column six) generates images of animals. We observe significant conflation of bedrooms and facial images in this dataset with the baseline model. Our model alleviates this problem to some extent, but does not solve it completely. However, we see that each partition does either primarily create faces or bedrooms. We would like to note that if a particular partition generates bad quality samples, it might be due to the inherent difficulty in creating that portion of the real distribution. In such cases, we can drop the outputs from that partition at the cost of not capturing 1 N of the distribution. We highlighted a major problem in training GANs on complex image datasets and introduced theoretical analysis for the problem of generation of unrealistic, conflated images in such cases. We proposed the addition of discontinuity in latent noise space of the generator for covering disjoint and diverse modes of the data distribution, and augmented the loss functions to encourage diversity. We showed improvements over existing models without much hyperparameter tuning. 
In the future, we hope to perform an extensive exploration of the search space to obtain a set of hyperparameters, along with better methods for introducing discontinuities in the generator, that perform well on a variety of datasets while significantly improving image quality. | We introduce theory to explain the failure of GANs on complex datasets and propose a solution to fix it. | 1,292 | scitldr |
To leverage crowd-sourced data to train multi-speaker text-to-speech (TTS) models that can synthesize clean speech for all speakers, it is essential to learn disentangled representations which can independently control the speaker identity and noise in generated signals. However, learning such representations can be challenging, due to the lack of labels describing the recording conditions of each training example, and the fact that speakers and recording conditions are often correlated, e.g. since users often make many recordings using the same equipment. This paper proposes three components to address this problem by: formulating a conditional generative model with factorized latent variables, using data augmentation to add noise that is not correlated with speaker identity and whose label is known during training, and using adversarial factorization to improve disentanglement. Experimental demonstrate that the proposed method can disentangle speaker and noise attributes even if they are correlated in the training data, and can be used to consistently synthesize clean speech for all speakers. Ablation studies verify the importance of each proposed component. Recent development of neural end-to-end TTS models BID26 BID1 enables control of both labelled and unlabelled speech attributes by conditioning synthesis on both text and learned attribute representations BID27 BID21 BID10 BID0 BID5 BID9. This opens the door to leveraging crowd-sourced speech recorded under various acoustic conditions BID18 to train a high-quality multi-speaker TTS model that is capable of consistently producing clean speech. To achieve this, it is essential to learn disentangled representations that control speaker and acoustic conditions independently. However, this can be challenging for two reasons. First, the underlying acoustic conditions of an utterance, such as the type and level of noise and reverberation, are difficult to annotate, and therefore such labels are often unavailable. This hinders the use of direct conditioning on the acoustic condition labels in a way similar to conditioning on one-hot speaker labels BID1. Second, speaker identity can have strong correlations with recording conditions, since a speaker might make most of their recordings in the same location using the same device. This makes it difficult to learn a disentangled representation by assuming statistical independence BID6.We address this scenario by introducing three components: a conditional generative model with factorized latent variables to control different attributes, data augmentation by adding noise to training utterances in order to counteract the inherent speaker-noise correlation and to create ground truth noisy acoustic condition labels, and adversarial training based on the generated labels to encourage disentanglement between latent variables. We utilize the VCTK speech synthesis dataset BID23, and noise signals from the CHiME-4 challenge BID24 to synthesize a dataset containing correlated speaker and noise conditions for controlled experiments. We extensively evaluate disentanglement performance on the learned latent representations as well as the synthesized samples. Experimental identify the contribution of each component, and demonstrate the ability of the proposed model to disentangle noise from speakers and consistently synthesize clean speech for all speakers, despite the strong correlation in the training data. 
We base our TTS model on Tacotron 2 BID20, which takes a text sequence as input, and outputs a sequence of mel spectrogram frames. To control speech attributes other than text, two additional latent variables, z s and z r, are introduced to condition the generative process, where the former models speaker identity, and the latter models residual unlabelled attributes (e.g. acoustic conditions). Prior distributions for both variables are defined to be isotropic Gaussian. The full TTS model can be written as a conditional generative model with two latent variables: p(speech | z s, z r, text).Two variational distributions are introduced: q(z s | speech) and q(z r | speech), to approximate the intractable posteriors of the latent variables, following the variational autoencoder (VAE) framework BID14. Each distribution is defined to be diagonal-covariance Gaussian, whose mean and variance are parameterized by a neural network encoder. Note that z s, z r, and text are assumed to be conditionally independent given speech, in order to simplify inference. In contrast to learning an embedding for each speaker, learning an inference model for z s can be used to infer speaker attributes for previously unseen speakers. To factorize speaker and residual information, an auxiliary speaker classifier that takes z s as input is trained jointly with the TTS model. This encourages information that is discriminative between speakers to be encoded in z s, and leaves residual information to z r. A simple fully-connected network is used for the speaker classifier. When acoustic conditions are correlated with speakers, information about e.g. noise level can be used to discriminate between speakers, and therefore can be encoded into z s. To counteract such behavior, one can decorrelate these factors by leveraging prior knowledge that adding noise should not affect speaker identity. We propose to augment the original training set with a noisy copy that mixes each utterance with a randomly selected piece of noise at a randomly sampled signal-to-noise ratio (SNR), but reuses the same transcript and speaker label as the original utterance. This operation can be seen as flattening the SNR distribution of each speaker, in order to make SNRs less discriminative about speakers. To increase the degree of disentanglement, it is also useful to proactively discourage z s from encoding acoustic condition information. If the ground truth acoustic condition labels are available, domain adversarial training BID3 can be applied directly to encourage z s not to be informative about the acoustic condition. Nevertheless, such labels are often unavailable in crowdsourced datasets such as BID18.In order to utilize adversarial training in such a scenario, we propose to use the augmentation label (original/augmented) to replace the acoustic condition label (clean/noisy). This augmentation label can be seen as a noisy acoustic condition label: an augmented utterance must be noisy, but an original one can be either. If z s is invariant to acoustic conditions, then it is also invariant to augmentation labels, implying that the latter is a necessary condition for the former. Following BID3, invariance of z s to augmentation is measured using the empirical H-divergence between the z s distribution of the augmented data and that of the original data, given a hypothesis class H that is a set of binary classifiers. 
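Before elaborating on the H-divergence, here is a minimal sketch of the noise-mixing augmentation described above (our own illustration; the paper does not give this implementation, and the 5-25 dB SNR range follows the synthetic setup described later):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add a noise clip to a speech waveform at the given signal-to-noise ratio (dB)."""
    # Loop or trim the noise so it covers the utterance.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale the noise power so that 10 * log10(p_speech / p_noise_scaled) = snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

speech = np.random.randn(16000)      # stand-in 1 s utterance at 16 kHz
noise = np.random.randn(48000)       # stand-in noise clip
snr = np.random.uniform(5.0, 25.0)   # SNR range used in the synthetic setup
noisy = mix_at_snr(speech, noise, snr)
# The augmented copy reuses the original transcript and speaker label (y_a = 1).
```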
The empirical H-divergence measures how well the best classifier in the hypothesis class can distinguish between samples drawn from different distributions. However, it is generally hard to compute the empirical H-divergence. Following BID2 BID3, we approximate it with the Proxy A-distance: 2(1 − 2), where is a generalization error of an augmentation classifier trained to predict if z s is inferred from an augmented utterance. A simple fully-connected network is used for the augmentation classifier. The complete model is illustrated in FIG0, composed of three modules: a synthesizer, p(speech | z s, z r, text), an inference network with two encoders, q(z s | speech) and q(z r | speech), and an adversarial factorization module with speaker and augmentation classifiers, p(y s | z s) and p(y a | z r), where y s and y a denotes speaker and augmentation labels. The parameters of the synthesizer, the two encoders, the speaker classifier, and the augmentation classifiers are accordingly denoted as θ, φ s, φ r, ψ s, and ψ a, respectively. Training of the proposed model aims to maximize the conditional likelihood and the information z s contains about speakers, while minimizing the H-divergence between the z s inferred from the original utterances and that from the augmented ones. The H-divergence is approximated with the Proxy A-distance obtained from the augmentation classifier. The objective function can be formulated as combining an evidence lower bound (ELBO) with a domain adversarial training BID3 objective: DISPLAYFORM0 where λ 1, λ 2 > 0 are the loss weights for the two classifiers, and ELBO(θ, φ s, φ r ; speech, text) is formulated as: DISPLAYFORM1 Note that the augmentation classifier is optimized with a different objective than the rest of the model. To train the entire model jointly, a gradient reversal layer BID3 is inserted after the input to the augmentation classifier, which scales the gradient by −λ 2. Our formulation of a TTS model with latent variables are closely related to BID27 BID21 BID0 BID5 BID9, which focus on modeling unlabeled speech attributes. In contrast to this work, BID27 BID21 BID0 BID5 do not address disentangling attributes to enable independent control when different attributes are highly correlated in the training data, while BID9 learns to disentangle speaker attributes from the rest by encoding those with small within-speaker variance to z s.The proposed augmentation-adversarial training combines data augmentation for speech BID11 with domain adversarial neural networks (DANNs) BID3 for disentangling correlated attributes. These two methods have been mainly applied for training robust discriminative models BID7 BID22 BID24 BID19, and are less studied in the context of building generative models. In addition, our method provides two advantages. First, while DANNs require domain labels, our proposed method enables adversarial training even when the ground truth domain labels are unavailable. Second, domain adversarial training aims to remove domain information while preserving target attribute information; however, if domain and target attribute have very strong correlations, the two objectives conflict with each other, and one of the them will be compromised. Our proposed method alleviates such issues by using data augmentation to decorrelate the two factors. Learning disentangled representations for deep generative models has gain much interest recently BID8 BID28. 
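As a reference for the gradient reversal layer mentioned above (inserted after the input to the augmentation classifier, scaling the gradient by minus lambda), a standard PyTorch implementation sketch follows; this is the common DANN-style construction, not code from the paper:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The gradient flowing back into the speaker encoder is reversed (and scaled),
        # so the encoder is pushed to remove augmentation information from z_s.
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

z_s = torch.randn(4, 64, requires_grad=True)
reversed_z_s = grad_reverse(z_s, lam=1.0)  # feed this to the augmentation classifier
```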
Several studies also explored adversarial training for disentanglement, such as using maximum mean discrepancy BID15 and generative adversarial network BID17. We particularly emphasize disentangling statistically correlated attributes, and apply H-divergence based adversarial training on latent variables. We artificially generated a noisy speech dataset with correlated speaker and noise conditions using the VCTK corpus BID23 and noise from the CHiME-4 challenge BID24. The motivation here is to simulate real noisy data while evaluating the model under carefully controlled conditions. VCTK contains 44 hours of clean English speech from 109 speakers. We downsample the signals to 16 kHz to match the noise sample rate, and split it into training and testing sets in a 9:1 ratio. The CHiME-4 corpus contains 8.5 hours of noise recorded in four different locations (bus, cafe, pedestrian area, and street junction), which we split into three partitions: train, test, and aug. To simulate speaker-correlated noise, we randomly selected half the speakers to be noisy, and mixed all of their train and test utterances with noise sampled from train and test respectively, at SNRs ranging from 5 -25 dB. As described in Section 2.2, we generated an augmented set by mixing every (potentially noisy) training utterance with a noise signal sampled from aug at SNRs ranging from 5 -25 dB. Utterances in the augmented set are annotated with y a = 1, and those in the original noisy training set are annotated with y a = 0. We strongly encourage readers to listen to the samples on the demo page. The synthesizer network use the sequence-to-sequence Tacotron 2 architecture BID20, with extra input z s and z r concatenated and passed to the decoder at each step. If not otherwise mentioned, z s is 64-dim and z r is 8-dim. The generated speech is represented as a sequence of 80-dim mel-scale filterbank frames, computed from 50ms windows shifted by 12.5ms. We represent input text as a sequence of phonemes, since learning pronunciations from text is not our focus. The speaker and the residual encoders both use the same architecture which closely follow the attribute encoder in BID9. Each encoder maps a variable length mel spectrogram to two vectors parameterizing the mean and log variance of the Gaussian posterior. Both classifiers are fully-connected networks with one 256 unit hidden layer followed by a softmax layer to predict the speaker or augmentation posterior. The synthesizer, encoders, and speaker classifier are trained to maximize Eq with λ 1 = λ 2 = 1, and the augmentation classifier is trained to maximize Eq. The entire model is trained jointly with a batch size of 256, using the Adam optimizer BID13, configured with an initial learning rate of 10 −3, and an exponential decay that halves the learning rate every 12.5k steps, starting at 50k steps. We quantify the degree of disentanglement by training speaker and noise classifiers on z s and z r separately. The classification accuracy on a held-out set is used to measure how much information a latent variable contains about the prediction targets. A simple linear discriminative analysis classifier is used for all four tasks. If the classifier input contains no information about the target, the best a classifier can do is to predict the highest prior probability class. 
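A minimal sketch of this probing protocol with scikit-learn's LDA classifier is given below (our own illustration; the dummy data merely shows the interface, and in practice the inputs are the inferred z_s or z_r vectors with speaker or noise labels):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def probe_accuracy(train_latents, train_labels, test_latents, test_labels):
    """Train a linear classifier on inferred latents and report held-out accuracy."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(train_latents, train_labels)
    return clf.score(test_latents, test_labels)

# Dummy stand-ins: 64-dim "speaker" latents with binary clean/noisy labels.
rng = np.random.default_rng(0)
z_s = rng.normal(size=(1000, 64))
noise_labels = rng.integers(0, 2, size=1000)
acc = probe_accuracy(z_s[:800], noise_labels[:800], z_s[800:], noise_labels[800:])
# High noise accuracy from z_r together with chance-level noise accuracy from z_s
# indicates that acoustic-condition information is allocated to z_r only.
```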
Since the distributions of both speaker and acoustic conditions are close to uniform, a speaker-uninformative input should in about 1% accuracy, and a noise-uninformative input should in about 50%.Results are shown in TAB1, comparing the full proposed model with two alternative models: one which removes adversarial training, denoted as "-adv,", and a second which further removes data augmentation, denoted as "-adv -aug." Without data augmentation and adversarial training, the second alternative completely fails to disentangle speaker from noise, i.e. its speaker encoding z s can infer both, while its residual encoding z s cannot infer either. The first alternative learns to encode acoustic condition into z r, reaching 96.5% accuracy on noise prediction; however, part of such information still leaks to z s, as indicated by the 85% noise prediction accuracy. The full proposed model achieves the highest noise prediction accuracy using z r, and the lowest accuracy using z s, implying the best allocation of acoustic information. Nevertheless, adversarial training also in slight degradation of speaker information allocation, where the speaker prediction accuracy using z r increases from 1.4% to 2.3%. We further analyze the latent space of the proposed model by visualizing the learned speaker and residual representations using t-SNE BID16, which is a technique for projecting high-dimensional vectors to a two-dimensional space. Results are shown in Figure 2, where each point corresponding to a projected z r (left column) or z s (right column) inferred from a single utterance. Points are color-coded according to speaker, gender, and accent labels in each row. In the left column, projected z r are clearly separated by acoustic condition, but not by gender or speaker. In contrast, projected z s shown in the right column forms many small clusters, with one speaker each cluster; Moreover, as shown in the middle row, clusters of speakers are further separated according to their genders. In the lower right panel, projected z s of noisy utterances and clean utterances are overlaid, demonstrating that z s have similar distributions conditioning on different acoustic conditions. To evaluate how well the two latent variables, z s and z r, can control the synthesized speech, we sample five clean speakers and five noisy speakers, and select one testing utterance for each speaker with duration ≥ 3s. For each of the ten utterances, the two latent variables are inferred using the corresponding encoders. We construct an evaluation set of 100 phrases that does not overlap with the VCTK corpus, and synthesize them conditioned on each combination of z r and z s, including those inferred from different utterances. The total 10,000 synthesized samples are divided into four groups, depending on the set of speakers (clean/noisy) z r and z s are inferred from. To quantify the ability to control noise, we use waveform amplitude distribution analysis (WADA) BID12 to estimate an SNR without a clean reference signal. We compare to a baseline multi-speaker Tacotron model, which removes the residual encoder and replaces the speaker encoder with a lookup table of 64-D speaker embeddings. The upper half of TAB2 presents the estimated SNRs of synthesized speech using this baseline, conditioning on the same five clean speakers and the five noisy speakers mentioned above. The difference in SNR between clean and noisy speakers indicates that the acoustic condition is tied to speaker identity in this baseline model. 
Results of the proposed model and the two alternatives mentioned in Section 4.2 are shown in the lower half of TAB2. By conditioning on z r inferred from clean utterances, the proposed model is able to synthesize clean speech even for noisy speakers whose training utterances all had noise. Moreover, when conditioning on the same set of z r, the proposed model achieves the smallest discrepancy in SNR between different z s sets. On the other hand, the "-adv" variant has a larger discrepancy between different z s sets, indicating worse disentanglement compared to the full model, while the "-adv-aug" variant fails to control noise through z r. These results are in line with the noise prediction results using z s and z r shown in TAB1. Figure 3 illustrates synthesized samples for a noisy speaker, comparing the baseline to our proposed model. Our model is capable of controlling noise using z r, and can generate clean speech for the noisy speaker, while the baseline output always contains noise. We next examine whether z s can control the speaker identity of synthesized speech, using a text-independent speaker verification system BID25 to compute speaker-discriminative embeddings, called d-vectors BID4, from the reference and synthesized speech samples. The system is trained to optimize a generalized end-to-end speaker verification loss, so that the embeddings of two utterances are close to each other if they are from the same speaker, and far away if they are from different speakers. We build a nearest-neighbor classifier, which assigns an input signal the speaker label of the reference signal whose d-vector is closest to that of the input, measured using Euclidean distance. To prevent noise from affecting d-vector quality, we only evaluate synthesized samples conditioned on z r from clean utterances. TAB3 shows that the synthesized samples closely resemble the speaker characteristics of their corresponding reference samples, regardless of the z r used for conditioning. The results indicate that speaker identity is controlled by z s, while being invariant to changes in z r. To quantify fidelity, we rely on crowd-sourced mean opinion scores (MOS), in which native speakers using headphones rate the naturalness of the synthesized samples, with scores ranging from 1 to 5 in 0.5 increments. Results shown in TAB4 compare the baseline and the proposed model conditioning on z r from clean utterances. When conditioning on z r from clean utterances, the proposed model achieves a higher MOS score than the baseline. In contrast, the MOS drops significantly when conditioning on z r inferred from noisy utterances. The results indicate that disentangling speaker and noise improves the naturalness of the generated speech, and that the proposed model can synthesize more natural speech with less noise than the baseline when conditioning on z r inferred from clean signals. Finally, we study the sensitivity of disentanglement performance with respect to the choice of speaker encoding dimensions. As shown in the previous two sections, good latent space disentanglement translates to good performance in terms of control of speaker identity and acoustic conditions for synthesis.
In this section, we only evaluate latent space disentanglement when changing the dimension of z s. TAB5 compares performance of the proposed model when the dimensionality of z s is 32, 64, 128, and 256. Variants without data augmentation or adversarial training fail to disentangle in all configurations. When the dimension of z s increases, both the proposed model and "-adv" report worse separation of information, as indicated by increased noise prediction accuracy using z s. Specifically, the "-adv" variant fails to encode noise information in z r when z s has 128 dimensions, which could result from a bad initialization of model parameters; however, such behavior also indicates that when adversarial training is not applied, the disentanglement performance may rely heavily on the model initialization. On the other hand, the proposed model is least sensitive to the change of z s dimensionality. It always achieves the highest noise prediction accuracy using z r, and the lowest noise prediction accuracy using z s. We build a neural network TTS model which incorporates conditional generative modeling, data augmentation, and adversarial training to learn disentangled representations of correlated and partially unlabeled attributes, which can be used to independently control different aspects of the synthesized speech. Extensive studies on a synthetic dataset verify the effectiveness of each element of the proposed solution, and demonstrate robustness to the choice of hyperparameters. The proposed method for disentangling correlated attributes is general, and can potentially be applied to other pairs of correlated factors, such as reverberation and speaker, or to other modalities, such as controllable text-to-image generation. In addition, for future work, we would also like to investigate the capability of the proposed method to disentangle pairs of attributes which are both unsupervised. | Data augmentation and adversarial training are very effective for disentangling correlated speaker and noise, enabling independent control of each attribute for text-to-speech synthesis. | 1,293 | scitldr |
LSTM-based language models exhibit compositionality in their representations, but how this behavior emerges over the course of training has not been explored. Analyzing synthetic data experiments with contextual decomposition, we find that LSTMs learn long-range dependencies compositionally by building them from shorter constituents during training. Consider the process of backpropagation through time for a language model. As an example, the language model should learn that an occurrence of "either" increases the later likelihood of "or". To do so, it must backpropagate information from the occurrence of "or" through some intervening constituent, which we will refer to as a conduit because the association of either/or is carried through it to affect the representation of "either". Perhaps it encounters a training example that uses a conduit that is predictable by being structured in familiar ways, here italicized: "Either Socrates is mortal or not all men are mortal." However, what if the conduit is unpredictable and the structure cannot be interpreted by the model, for example, if the conduit includes unknown tokens, as in: "Either slithy toves gyre or mome raths outgrabe"? Which conduit will carry the gradient from "or" to "either" easily?Formally, as the gradient of the error e t at timestep t is backpropagated k timesteps through the hidden state h: DISPLAYFORM0 The backpropagated message is multiplied repeatedly by the gradients associated with each item in the conduit. If the recurrence derivatives ∂h i+1 ∂h i are large at some parameter, the correspondingly larger backpropagated gradient ∂et ∂h t−k will accelerate descent in that direction. When we ask which conduit will carry the gradient message to learn a long-range dependency faster, the answer will depend on the magnitude and distribution of the recurrence gradients. If the language model relies on linguistic structure in the conduit in order to pass the message effectively, then the more predictable conduit will facilitate learning a long-range pattern. In order to investigate whether long-range dependencies are built from short constituents, we train models on synthetic data which varies the predictability of short sequences. We find that memorizing local patterns allows a language model to learn a long-range dependency faster but ultimately inhibits its ability to fully acquire longrange rules. How do neural language models learn? The key to answering this question is to understand the compositionality of LSTM training. To this end, we connect the hierarchical structure of language model representations with the incremental nature of neural learning dynamics. We have extensive evidence that hierarchical structure is integral to the high performance of fully trained neural language models. LSTMs learn more effectively from natural language data than from similarly distributed random data, implying that they take advantage of linguistic structure. The representations they produce seem to be hierarchical in nature BID5 BID3 BID6. They implicitly exhibit a number of compositionality assumptions linguists tend to make by encoding information about part of speech BID1, morphology BID18, and verb agreement BID7. But the compositional nature of these representations tells us little about the process by which they are learned. Humans learn by memorizing short rote phrases and later mastering the ability to construct deep syntactic trees from them BID8. 
LSTM models, meanwhile, learn by backpropagation through time, leading to different inductive priors compared to a human. We may not therefore expect an LSTM to exhibit similarly compositional learning behavior. However, language models are known to encode hierarchical syntax, so we must consider whether they learn hierarchically as well, by building longer constituents out of shorter ones during training. Recognizing the role of inductive priors in training is critical. LSTMs have the theoretical capacity to encode a wide range of context-sensitive languages, but in practice their ability to learn such rules from data is limited, implicating the importance of the training process BID19. However, we may find that the hierarchical nature of the representation is entirely a of the data, rather than induced by the biases of the training process. LSTMs by default learn associations from the most recent items in a sequence, but they are still capable of learning to encode grammatical inflection from the first word in a sequence rather than the most recent BID14. Inductive priors play a critical role in the ability of an LSTM to learn effectively, but they are neither necessary nor sufficient in determining what the model can learn. We therefore investigate further into LSTM learning dynamics. In general, work in deep learning has supported the assumption that easy examples are learned before hard examples BID0. A controversial proposal by BID17 held that learning begins with a memorization phase followed by a compression phase which makes the model more general, a claim that has been extensively debated with evidence for BID12 and against BID16 it. If the hypothesis holds generally, the transition from memorized to compressed rules is another example of, or potential explanation for, easy-first learning. In the case of an LSTM, dependency range is one aspect of difficulty that might affect the order of learning. For example, an either/or matching over a short distance can be memorized, but over a long distance requires an encoding of concepts like constituency in order to be applied generally. The learning dynamics of an LSTM cause lower layers to converge faster than higher layers when there are many layers BID13, which combined with findings of hierarchy BID3 BID2 imply that the local connections encoded by lower layers are learned before the more distant connections encoded by higher layers. Even within a single layer, BID15 found that local properties, such as syntactic tags, were learned earlier than the less local property of topic. The transition from local dependencies to more global dependencies is yet another example of how simple patterns are required before complex ones. However, even if simple local rules are learned first, they might not be used compositionally in constructing longer rules. In fact, simple rules learned early on might inhibit the learning of more complex rules through the phenomenon of gradient starvation BID4, in which more frequent features reduce the gradient directed at rarer features. Simple local rules could slow down the training process by affecting the recurrence from timestep to timestep to degrade the gradient, or by trapping the model in a local minimum which makes the long-distance rule harder to reach. The compositional view of training is not a given and must be verified. All experiments use a one layer LSTM, with inputs taken from an embedding layer and outputs processed by a softmax layer. All hidden dimensions are 200. 
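A minimal sketch of this model, that is, an embedding layer feeding a single-layer LSTM with 200-dimensional hidden states and a softmax output over the vocabulary, follows (our own illustration; the vocabulary size and sequence length are assumptions matching the synthetic setup described below):

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size=1000, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)  # logits; softmax applied in the loss

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)
        return self.out(h), state

model = LSTMLanguageModel()
tokens = torch.randint(0, 1000, (32, 35))             # batch of token-id sequences
logits, _ = model(tokens)
# Next-token prediction loss (shift targets by one position).
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 1000), tokens[:, 1:].reshape(-1)
)
```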
We train with a learning rate set at 1 throughout and gradients clipped at 0.25. We found momentum and weight decay to slow rule learning in this setting, so they are not used. In our running example, we need to determine when our language model has learned that "either" implies an appearance of "or" later in the sequence. It is difficult to directly measure the influence of "either" on the later occurrence of "or", so in order to dissect the sequence and understand the impact of individual elements in the sequence, we employ contextual decomposition (CD).First introduced by BID11, CD is a method of looking at the individual influences of words and phrases in a sequence on the output of a recurrent model. CD converts the output vector from an LSTM layer into a sum of relevant (contributed only by the input word or phrase of interest x γ ; represented as v γ) and irrelevant (contributed by, or involving interactions with, other words in the sequence; represented as v β) parts, v = v γ + v β. Because the individual contributions of the items in a sequence interact in nonlinear ways, it is difficult to disentangle the impact of a specific word or phrase on the label distribution predicted. However, the dynamics of LSTMs are approximately linear in natural settings, as found by BID10, who used canonical correlation analysis to find close linear projections between the activations at each timestep in a repeating sequence and the activations at the end of the sequence. It is therefore unsurprising that approximation error is low for the CD approach of linearizing the output of a LSTM layer so it can be viewed as the sum of a relevant component and the contributions of the rest of the sequence. While BID11 were primarily interested in analyzing the importance and classification tendencies of the phrases and words that formed a sequence, we are interested in understanding whether a dependency between two words has been learned at all. Because the decomposed logits can be used as inputs for a softmax, we convert the decomposed elements into probability distributions by P (x|x γ) = softmax(v γ). This allows us to analyze the effect of x γ on a later element x while controlling for the influence of the rest of the sequence. We consider the running either-or example dependency to have been effectively learned when the contribution of the opening token ('either') places a high probability on its mate ('or') at the appropriate timestep when'or' occurs in the data. We use synthetic data to test the ability of an LSTM to learn a consistent rule with a longdistance dependency. This controls for the irregularity of natural language as well as for the confounding factor of rule frequency. While LSTMs in natural language model settings learn shortrange dependencies first, we must consider the possibility that this pattern is unrelated to any inductive prior. It could be that longer-range dependencies are simply rarer and therefore learned later. Our synthetic data sets instead have a fixed number of occurrences of the long distance relationship, regardless of the conduit length. We generate data uniformly at random from a vocabulary Σ. However, we insert n instances of the long-distance rule αΣ k ω (with conduit length k), where we consider an open symbol α and a close symbol ω, with α, ω ∈ Σ. Relating to our running example, α stands for "either" and ω stands for "or". 
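A minimal sketch of generating such a corpus is shown below (our own illustration; the exact insertion procedure is not specified in the text, and reserving two vocabulary indices for α and ω while excluding them from the background text is our simplification):

```python
import numpy as np

def make_corpus(corpus_len=1_000_000, vocab=1000, n_rules=1000, k=5, seed=0):
    """Uniform random tokens with n_rules occurrences of the pattern: alpha Sigma^k omega."""
    rng = np.random.default_rng(seed)
    alpha, omega = 0, 1                                   # reserved open/close symbols
    corpus = rng.integers(2, vocab, size=corpus_len)      # background text
    # Insert the open/close pair around k-token conduits at random positions
    # (overlaps are vanishingly unlikely at this density, so they are ignored here).
    starts = rng.choice(corpus_len - (k + 2), size=n_rules, replace=False)
    for s in starts:
        corpus[s] = alpha
        corpus[s + k + 1] = omega                         # conduit is corpus[s+1 : s+k+1]
    return corpus

corpus = make_corpus()
```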
We use a corpus of 1m tokens with |Σ| = 1k types, which leaves a low probability that any conduit sequence longer than 1 token appears elsewhere by chance. For all analyses, CD yielded an approximation error Limitations While we hope to isolate the role of long range dependencies through synthetic data, we must consider the possibility that the natural predictability of language data differs in relevant ways from the synthetic data, in which the conduits are predictable only through pure memorization. Because LSTM models take advantage of linguistic structure, we cannot be confident that predictable natural language exhibits the same cell state dynamics that make a memorized uniformly sampled conduit promote or inhibit longrange rule learning. First, we investigate how the frequency of a rule affects the ability of the model to learn the rule. We vary the conduit length k while keeping n constant. The in FIG1 illustrate how a longer conduit length requires more examples before the model can learn the corresponding rule. We consider the probability assigned to the close symbol according to the contributions of the open symbol, excluding interaction from any other token in the sequence. For contrast, we also show the extremely low probability assigned to the close symbol according to the contributions of the conduit taken as an entire phrase. In particular, note the pattern when the rule is extremely rare: The probability of the close symbol as determined by the open symbol is low but steady regardless of the conduit length, while the probability as determined by the conduit declines with conduit length due to the accumulated low probabilities from each element in the sequence. To understand the impact of the open symbol in context, see FIG2, which illustrates that the conduit interacts with the open symbol to increase the probability slightly, a sign that the model is counting the intervening symbols rather than registering only the effect of the open symbol. To understand the effect of conduit predictability, we modify the synthetic data such that the sequence in the conduit appears frequently outside of the long-distance rule. In this experiment, the conduits are actually taken from a randomly generated vocabulary of 100, so that each unique conduit q appears in the training corpus 10 times in the context αqω. This repetition is necessary in order to fit n = 1000 occurrences of the rule in all settings. In the unpredictable-conduit setting, q appears only in this context as a conduit, so the conduit remains random and unpredictable. In the predictable-conduit setting, we randomly distribute m = 1000 occurrences of each conduit q throughout the corpus outside of the rule patterns. In the predictable-conduit setting, each con- duit is seen often enough to be memorized. As we see in FIG3, copying each conduit 100 times throughout the corpus inhibits learning of the symbol-matching rule over the long run of training, but promotes early learning of the rule. This implies that long-range dependencies are learned from the structure of their constituents. Therefore the model is delayed during training in representing longer dependencies in part because it depends on constituents being effectively learned first. We confirm that the longer the span of a rule, the more examples are required for an LSTM model to effectively learn the rule. 
We then find1 that a more predictable conduit between the rule symbols promotes the early learning of the rule, implying that the process by which an LSTM learns long-range rules is compositional. However, the representation learned through the predictable conduit ultimately prevents the model from confidently learning these long-range connections. | LSTMs learn long-range dependencies compositionally by building them from shorter constituents over the course of training. | 1,294 | scitldr |
Learning a deep neural network requires solving a challenging optimization problem: it is a high-dimensional, non-convex and non-smooth minimization problem with a large number of terms. The current practice in neural network optimization is to rely on the stochastic gradient descent (SGD) algorithm or its adaptive variants. However, SGD requires a hand-designed schedule for the learning rate. In addition, its adaptive variants tend to produce solutions that generalize less well on unseen data than SGD with a hand-designed schedule. We present an optimization method that offers empirically the best of both worlds: our algorithm yields good generalization performance while requiring only one hyper-parameter. Our approach is based on a composite proximal framework, which exploits the compositional nature of deep neural networks and can leverage powerful convex optimization algorithms by design. Specifically, we employ the Frank-Wolfe (FW) algorithm for SVM, which computes an optimal step-size in closed-form at each time-step. We further show that the descent direction is given by a simple backward pass in the network, yielding the same computational cost per iteration as SGD. We present experiments on the CIFAR and SNLI data sets, where we demonstrate the significant superiority of our method over Adam, Adagrad, as well as the recently proposed BPGrad and AMSGrad. Furthermore, we compare our algorithm to SGD with a hand-designed learning rate schedule, and show that it provides similar generalization while often converging faster. The code is publicly available at https://github.com/oval-group/dfw. Since the introduction of back-propagation BID23, stochastic gradient descent (SGD) has been the most commonly used optimization algorithm for deep neural networks. While yielding remarkable performance on a variety of learning tasks, a downside of the SGD algorithm is that it requires a schedule for the decay of its learning rate. In the convex setting, curvature properties of the objective function can be used to design schedules that are hyper-parameter free and guaranteed to converge to the optimal solution . However, there is no analogous of practical interest for the non-convex optimization problem of a deep neural network. An illustration of this issue is the diversity of learning rate schedules used to train deep convolutional networks with SGD: BID25 and adapt the learning rate according to the validation performance, while BID27, BID3 and BID8 use pre-determined schedules, which are respectively piecewise constant, geometrically decaying, and cyclic with a cosine annealing. While these protocols in competitive or state-of-the-art on their learning task, there does not seem to be a consistent methodology. As a , finding such a schedule for a new setting is a time-consuming and computationally expensive effort. To alleviate this issue, adaptive gradient methods have been developed BID36 BID4 BID21, and borrowed from online convex optimization . Typically, these methods only require the tuning of the initial learning rate, the other hyper-parameters being considered robust across applications. However, it has been shown that such adaptive gradient methods obtain worse generalization than SGD BID32. This observation is corroborated by our experimental . In order to bridge this performance gap between existing adaptive methods and SGD, we introduce a new optimization algorithm, called Deep Frank-Wolfe (DFW). 
The DFW algorithm exploits the composite structure of deep neural networks to design an optimization algorithm that leverages efficient convex solvers. In more detail, we consider a composite (nested) optimization problem, with the loss as the outer function and the function encoded by the neural network as the inner one. At each iteration, we define a proximal problem with a first-order approximation of the neural network (linearized inner function), while keeping the loss function in its exact form (exact outer function). When the loss is the hinge loss, each proximal problem created by our formulation is exactly a linear SVM. This allows us to employ the powerful Frank-Wolfe (FW) algorithm as the workhorse of our procedure. There are two by-design advantages to our method compared to the SGD algorithm. First, each iteration exploits more information about the learning objective, while preserving the same computational cost as SGD. Second, an optimal step-size is computed in closed-form by using the FW algorithm in the dual (BID5 . Consequently, we do not need a hand-designed schedule for the learning rate. As a , our algorithm is the first to provide competitive generalization error compared to SGD, all the while requiring a single hyper-parameter and often converging significantly faster. We present two additional improvements to customize the use of the DFW algorithm to deep neural networks. First, we show how to smooth the loss function to avoid optimization difficulties arising from learning deep models with SVMs . Second, we incorporate Nesterov momentum to accelerate our algorithm. We demonstrate the efficacy of our method on image classification with the CIFAR data sets using two architectures: wide residual networks BID35 and densely connected convolutional neural networks BID3; we also provide experiments on natural language inference with a Bi-LSTM on the SNLI corpus . We show that the DFW algorithm often strongly outperforms previous methods based on adaptive learning rates. Furthermore, it provides comparable or better accuracy to SGD with hand-designed learning rate schedules. In , our contributions can be summed up as follows:• We propose a proximal framework which preserves information from the loss function.• For the first time for deep neural networks, we demonstrate how our formulation gives at each iteration (i) an optimal step-size in closed form and (ii) an update at the same computational cost as SGD.• We design a novel smoothing scheme for the dual optimization of SVMs.• To the best of our knowledge, the ing DFW algorithm is the first to offer comparable or better generalization to SGD with a hand-designed schedule on the CIFAR data sets, all the while converging several times faster and requiring only a single hyperparameter. Non Gradient-Based Methods. The success of a simple first-order method such as SGD has led to research in other more sophisticated techniques based on relaxations (BID37, learning theory , Bregman iterations BID29, and even second-order methods BID22 BID10 BID18, BID9, , , BID11. While such methods hold a lot of promise, their relatively large per-iteration cost limits their scalability in practice. As a , gradient-based methods continue to be the most popular optimization algorithms for learning deep neural networks. Adaptive Gradient Methods. As mentioned earlier, one of the main challenges of using SGD is the design of a learning rate schedule. 
Several works proposed alternative first-order methods that do not require such a schedule, by either modifying the descent direction or adaptively rescaling the step-size (BID36 BID24 BID4 BID38 BID21 . However, as noted above, the adaptive variants of SGD sometimes provide subpar generalization BID32 .Learning to Learn and Meta-Learning. Learning to learn approaches have also been proposed to optimize deep neural networks. and BID33 learn the learning rate to avoid a hand-designed schedule and to improve practical performance. Such methods can be combined with our proposed algorithm to learn its proximal coefficient, instead of considering it as a fixed hyper-parameter to be tuned. Meta-learning approaches have also been suggested to learn the optimization algorithm BID0 BID20 BID31 BID7 . This line of work, which is orthogonal to ours, could benefit from the use of DFW to optimize the meta-learner. Optimization and Generalization. Several works study the relationship between optimization and generalization in deep learning. In order to promote generalization within the optimization algorithm itself, BID15 proposed the Path-SGD algorithm, which implicitly controls the capacity of the model. However, their method required the model to employ ReLU non-linearity only, which is an important restriction for practical purposes. , , BID17, BID2 and analyzed how existing optimization algorithms implicitly regularize deep neural networks. However this phenomenon is not yet fully understood, and the ing empirical recommendations are sometimes opposing ( BID2 .Proximal Methods. The back-propagation algorithm has been analyzed in a proximal framework in . Yet, the ing approach still requires the same hyper-parameters as SGD and incurs a higher computational cost per iteration. Linear SVM Sub-Problems. A main component of our formulation is to formulate sub-problems as linear SVMs. In an earlier work , we showed that neural networks with piecewise linear activations could be trained with the CCCP algorithm BID34, which yielded approximate SVM problems to be solved with the BCFW algorithm BID5. However this algorithm only updates the parameters of one layer at a time, which slows down convergence significantly in practice. Closest to our approach are the works of BID1 and BID26. BID1 suggested to create a local SVM based on a first-order Taylor expansion and a proximal term, in order to lower the error of every data sample while minimizing the changes in the weights. However their method operated in a non-stochastic setting, making the approach infeasible for large-scale data sets. BID26, a parallel work to ours, also created an SVM problem using a first-order Taylor expansion, this time in a mini-batch setting. Their work provided interesting insights from a statistical learning theory perspective. While their method is well-grounded, its significantly higher cost per iteration impairs its practical speed and scalability. As such, it can be seen as complementary to our empirical work, which exploits a powerful solver and provides state-of-the-art scalability and performance. Before describing our formulation, we introduce some necessary notation. We use · to denote the Euclidean norm. Given a function φ, ∂φ(u) û is the derivative of φ with respect to u evaluated atû. According to the situation, this derivative can be a gradient, a Jacobian or even a directional derivative. Its exact nature will be clear from context throughout the paper. 
We also introduce the first-order Taylor expansion of φ around the pointû: Tûφ(u) = φ(û) + (∂φ(u) û ) (u −û). For a positive integer p, we denote the set {1, 2, ..., p} as [p]. For simplicity, we assume that stochastic algorithms process only one sample at each iteration, although the methods can be trivially extended to mini-batches of size larger than one. We suppose we are given a data set (x i, y i) i∈ [N], where each DISPLAYFORM0 is a sample annotated with a label y i from the output space Y. The data set is used to estimate a parameterized model represented by the function f. Given its (flattened) parameters w ∈ R p, and an input DISPLAYFORM1, a vector with one score per element of the output space Y. For instance, f can be a linear map or a deep neural network. Given a vector of scores per label s ∈ R |Y|, we denote by L(s, y i) the loss function that computes the risk of the prediction scores s given the ground truth label y i. For example, the loss L can be cross-entropy or the multi-class hinge loss: DISPLAYFORM2 DISPLAYFORM3 The cross-entropy loss tries to match the empirical distribution by driving incorrect scores as far as possible from the ground truth one. The hinge loss attempts to create a minimal margin of one between correct and incorrect scores. The hinge loss has been shown to be more robust to over-fitting than cross-entropy, when combined with smoothing techniques that are common in the optimization literature . To simplify notation, we introduce DISPLAYFORM4. Finally, we denote by ρ(w) the regularization (typically the squared Euclidean norm). We now write the learning problem under its empirical risk minimization form: DISPLAYFORM5 3.2 A PROXIMAL APPROACH Our main contribution is a formulation which exploits the composite nature of deep neural networks in order to obtain a better approximation of the objective at each iteration. Thanks to the careful approximation design, this approach yields sub-problems that are amenable to efficient optimization by powerful convex solvers. In order to understand the intuition of our approach, we first present a proximal gradient perspective on SGD.The SGD Algorithm. At iteration t, the SGD algorithm selects a sample j at random and observes the objective estimate ρ(w t) + L j (f j (w t)). Then, given the learning rate η t, it performs the following update on the parameters: DISPLAYFORM6 Equation FORMULA6 is the closed-form solution of a proximal problem where the objective has been linearized by the first-order Taylor expansion T wt : DISPLAYFORM7 To see the relationship between and FORMULA7, one can set the gradient with respect to w to 0 in equation FORMULA7, and observe that the ing equation is exactly. In other words, SGD minimizes a first-order approximation of the objective, while encouraging proximity to the current estimate w t.However, one can also choose to linearize only a part of the composite objective BID6. Choosing which part to approximate is a crucial decision, because it yields optimization problems with widely different properties. In this work, we suggest an approach that lends itself to fast optimization with robust convex solvers and preserves information about the learning task by keeping an exact loss function. Loss-Preserving Linearization. 
In detail, at iteration t, with selected sample j, we introduce the proximal problem that linearizes the regularization ρ and the model f j, but not the loss function L: Figure 1: We illustrate the different approximations on a synthetic composite objective function Φ(w) = L(f (w)) (Φ is plotted in black). In this example, L is a maximum of linear functions (similarly to a hinge loss) and f is a non-linear smooth map. We denote the current iterate by w t, and the point minimizing Φ by w *. On the left-hand side, one can observe how the SGD approximation is a single line (tangent at Φ(w t), in blue), while the LPL approximation is piecewise linear (in orange), and thus matches the objective curve (in black) more closely. On the right-hand side, an identical proximal term is added to both approximations to visualize equations FORMULA7 and FORMULA8. Thanks to the better accuracy of the LPL approximation, the iterate w LPL t+1 gets closer to the solution w * than w SGD t+1. This effect is particularly true when the proximal coefficient 1 2ηt is small, or equivalently, when the learning rate η t is large. Indeed, the accuracy of the local approximation becomes more important when the proximal term is contributing less (e.g. when η t is large). DISPLAYFORM8 In figure 1, we provide a visual comparison of equations FORMULA7 and FORMULA8 in the case of a piecewise linear loss. As will be seen, by preserving the loss function, we will be able to achieve good performance across a number of tasks with a fixed η t = η. Consequently, we will provide the first algorithm to accurately learn deep neural networks with only a single hyper-parameter while offering similar performance compared to SGD with a hand-designed schedule. We focus on the optimization of equation FORMULA8 when L is a multi-class hinge loss. The of this section were originally derived for linear models BID5. Our contribution is to show for the first time how they can be exploited for deep neural networks thanks to our formulation. We will refer to the ing algorithm for neural networks as Deep Frank-Wolfe (DFW). We begin by stating the key advantage of our method. Proposition 1 (Optimal step-size, BID5). Problem with a hinge loss is amenable to optimization with Frank-Wolfe in the dual, which yields an optimal step-size γ t ∈ in closed-form at each iteration t. This optimal step-size can be obtained in closed-form because the hinge loss is convex and piecewise linear. In fact, the approach presented here can be applied to any loss function L that is convex and piecewise linear (another example would be the l 1 distance for regression for instance).Since the step-size can be computed in closed-form, the main computational challenge is to obtain the update direction, that is, the conditional gradient of the dual. In the following , we show that by taking a single step per proximal problem, this dual conditional gradient can be computed at the same cost as a standard stochastic gradient. The proof is available in appendix A.5. If a single step is performed on the dual of, its conditional gradient is given by −∂ (ρ(w) + L y (f x (w))) wt. Given the step-size γ t, the ing update can be written as: DISPLAYFORM0 In other words, the cost per iteration of the DFW algorithm is the same as SGD, since the update only requires standard stochastic gradients. In addition, we point out that in a mini-batch setting, the conditional gradient is given by the average of the gradients over the mini-batch. 
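To make the cost argument of Proposition 2 concrete, the following PyTorch sketch (the paper's experiments are implemented in pytorch) computes the multi-class hinge loss of Section 3.1 and obtains the dual conditional gradient as an ordinary stochastic (sub)gradient of ρ(w) + L_j(f_j(w)), averaged over the mini-batch. Function names and the l2 coefficient are illustrative assumptions and are not taken from the authors' implementation.

```python
import torch

def multiclass_hinge(scores, y):
    """L(s, y) = max_j [ Delta(j, y) + s_j - s_y ] with Delta = 1{j != y}.
    scores: (batch, n_classes), y: (batch,) integer labels."""
    delta = torch.ones_like(scores)
    delta.scatter_(1, y.unsqueeze(1), 0.0)        # task loss: 0 on the true class, 1 elsewhere
    s_y = scores.gather(1, y.unsqueeze(1))         # score of the ground-truth class
    return (delta + scores - s_y).max(dim=1).values.mean()

# Proposition 2 (informal reading): with a single dual step, the conditional gradient
# is simply the stochastic gradient of rho(w) + L_y(f_x(w)), so it costs one ordinary
# backward pass and is averaged over the samples of the mini-batch.
def conditional_gradient(model, x, y, l2=1e-4):
    scores = model(x)
    loss = multiclass_hinge(scores, y)
    reg = 0.5 * l2 * sum(p.pow(2).sum() for p in model.parameters())
    grads = torch.autograd.grad(loss + reg, list(model.parameters()))
    return loss.detach(), grads
```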
As a consequence, we can use batch Frank-Wolfe in the dual rather than coordinate-wise updates, with the same parallelism as SGD over the samples of a mini-batch. One can observe how the update exploits the optimal step-size γ t ∈ given by Proposition 1. There is a geometric interpretation to the role of this step-size γ t. When γ t is set to its minimal value 0, the ing iterate does not move along the direction ∂L j (f j (w)) wt. Since the step-size is optimal, this can only happen if the current iterate is detected to be at a minimum of the piecewise linear approximation. Conversely, when γ t reaches its maximal value 1, the algorithm tries to move as far as possible along the direction ∂L j (f j (w)) wt. In that case, the update is the same as the one obtained by SGD (as given by equation FORMULA6). In other words, γ t can automatically decay the effective learning rate, hereby preventing the need to design a learning rate schedule by hand. As mentioned previously, the DFW algorithm performs only one step per proximal problem. Since problem FORMULA8 is only an approximation of the original problem, it may be unnecessarily expensive to solve it very accurately. Therefore taking a single step per proximal problem may help the DFW algorithm to converge faster. This is confirmed by our experimental , which show that DFW is often able to minimize the learning objective at greater speed than SGD. We present two improvements to customize the application of our algorithm to deep neural networks. Smoothing. The SVM loss is non-smooth and has sparse derivatives, which can cause difficulties when training a deep neural network . In Appendix A.6, we derive a novel that shows how we can exploit the smooth primal cross-entropy direction and inexpensively detect when to switch back to using the standard conditional gradient. Nesterov Momentum. To take advantage of acceleration similarly to the SGD baseline, we adapt the Nesterov momentum to the DFW algorithm. We defer the details to the appendix in A.7 for space reasons. We further note that the momentum coefficient µ is typically set to a high value, say 0.9, and does not contribute significantly to the computational cost of cross-validation. The main steps of DFW are shown in Algorithm 1. As the key feature of our approach, note that the step-size is computed in closed-form in step 10 of the algorithm (colored in blue).Note that only the hyper-parameter η will be tuned in our experiments: we will use the same batch-size, momentum and number of epochs as the baselines in our experiments (unless specified otherwise). In addition, we point out again that when γ t = 1, we recover the SGD step with Nesterov momentum. In sections A.5 and A.6 of the appendix, we detail the derivation of the optimal step-size (step 10) and the computation of the search direction (step 7). The computation of the dual search direction is omitted here for space reasons. However, its implementation is straightforward in practice, and its computational cost is linear in the size of the output space. Finally, we emphasize that the DFW algorithm is motivated by an empirical perspective. While our method is not guaranteed to converge, our experiments show an effective minimization of the learning objective for the problems encountered in practice. We compare the Deep Frank Wolfe (DFW) algorithm to the state-of-the-art optimizers. 
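For illustration, the following is a minimal, momentum-free sketch of steps 7-12 of Algorithm 1 as we read them. The closed-form expression for γ_t is our reconstruction from the single-step dual derivation sketched in Appendix A.5 and may differ in detail from the official implementation at https://github.com/oval-group/dfw, which should be treated as authoritative; the smoothed direction of Appendix A.6 is also omitted here.

```python
import torch

@torch.no_grad()
def dfw_step(params, loss_value, loss_grads, eta, l2=1e-4, eps=1e-12):
    """One (momentum-free) DFW update on a mini-batch.

    params:     list of parameter tensors (updated in place)
    loss_value: scalar value of the hinge loss on the mini-batch
    loss_grads: gradients of the hinge loss alone w.r.t. params (delta_t)
    eta:        the single hyper-parameter, analogous to an initial learning rate
    """
    r_t = [l2 * p for p in params]                 # derivative of the l2 regularization
    # Closed-form optimal step-size, clipped to [0, 1]; this expression is a
    # reconstruction from the single-step dual derivation (Appendix A.5).
    num = float(loss_value) - eta * sum(float((r * d).sum()) for r, d in zip(r_t, loss_grads))
    den = eta * sum(float(d.pow(2).sum()) for d in loss_grads) + eps
    gamma = max(0.0, min(1.0, num / den))
    for p, r, d in zip(params, r_t, loss_grads):
        p.add_(-eta * (r + gamma * d))             # w <- w - eta * (r_t + gamma_t * delta_t)
    return gamma
```

Note that when γ_t saturates at 1 the update reduces to an SGD step of size η, consistent with the discussion above; when γ_t shrinks, the effective learning rate is decayed automatically.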
We show that, across diverse data sets and architectures, the DFW algorithm outperforms adaptive gradient methods (with the exception of one setting, DN-10, where it obtains similar performance to AMSGrad and BPGrad). In addition, the DFW algorithm offers competitive and sometimes superior performance to Receive data of mini-batch (x i, y i) i∈B 6: DISPLAYFORM0 ∀i ∈ B, s DISPLAYFORM1 Dual direction (details in Appendix A.6) 8: DISPLAYFORM2 Derivative of (smoothed) loss function 9:r t = ∂ρ(w) wt Derivative of regularization 10: DISPLAYFORM3 Step-size 11: DISPLAYFORM4 12: DISPLAYFORM5 Parameters update 13: DISPLAYFORM6 end for 15: end for SGD at a lower computational cost, even though SGD has the advantage of a hand-designed schedule that has been chosen separately for each of these tasks. Our experiments are implemented in pytorch BID19, and the code is available at https://github.com/oval-group/dfw. All models are trained on a single Nvidia Titan Xp card. Data Set & Architectures. The CIFAR-10/100 data sets contain 60,000 RGB natural images of size 32 × 32 with 10/100 classes . We split the training set into 45,000 training samples and 5,000 validation samples, and use 10,000 samples for testing. The images are centered and normalized per channel. Unless specified otherwise, no data augmentation is employed. We perform our experiments on two modern architectures of deep convolutional neural networks: wide residual networks BID35, and densely connected convolutional networks BID3. Specifically, we employ a wide residual network of depth 40 and width factor 4, which has 8.9M parameters, and a "bottleneck" densely connected convolutional neural network of depth 40 and growth factor 40, which has 1.9M parameters. We refer to these architectures as WRN and DN respectively. All the following experimental details follow the protocol of BID35 and BID3. The only difference is that, instead of using 50,000 samples for training, we use 45,000 samples for training, and 5,000 samples for the validation set, which we found to be essential for all adaptive methods. While Deep Frank Wolfe (DFW) uses an SVM loss, the baselines are trained with the Cross-Entropy (CE) loss since this ed in better performance. Method. We compare DFW to the most common adaptive learning rates currently used: Adagrad , Adam (, the corrected version of Adam called AMSGrad BID21, and BPGrad BID38 . For these methods and for DFW, we cross-validate the initial learning rate as a power of 10. We also evaluate the performance of SGD with momentum (simply referred to as SGD), for which we follow the protocol of BID35 and BID3. For all methods, we set a budget of 200 epochs for WRN and 300 epochs for DN. Furthermore, the batch-size is respectively set to 128 and 64 for WRN and DN as in BID35 and BID3. For DN, the l 2 regularization is set to 10 −4 as in BID3. For WRN, the l 2 is cross-validated between 5.10 DISPLAYFORM0, as in BID35, and 10 −4, a more usual value that we have found to perform better for some of the methods (in particular DFW, since the corresponding loss function is an SVM instead of CE, for which the value of 5.10 DISPLAYFORM1 was designed). The value of the Nesterov momentum is set to 0.9 for BPGrad, SGD and DFW. DFW has only one hyper-parameter to tune, namely η, which is analogous to an initial learning rate. For SGD, the initial learning rate is set to 0.1 on both WRN and DN. Following BID35 and BID3, it is then divided by 5 at epochs 60, 120 and 180 for WRN, and by 10 at epochs 150 and 225 for DN.Results. 
We present the results in Table 1. Observe that DFW significantly outperforms the adaptive gradient methods, particularly on the more challenging CIFAR-100 data set. On the WRN-CIFAR-100 task in particular, DFW obtains a testing accuracy which is about 7% higher than all other adaptive methods and outperforms SGD with a hand-designed schedule by 1%. The inferior generalization of adaptive gradient methods is consistent with the findings of BID32. On all tasks, the accuracy of DFW is comparable to SGD. Note that DFW converges significantly faster than SGD: the network reaches its final performance several times faster than SGD in all cases. We illustrate this with an example in Figure 2, which plots the training and validation errors on DN-CIFAR-100. In Figure 3, one can see how the step-size is automatically decayed by DFW on this same experiment: we compare the effective step-size γ_t η for DFW to the manually tuned η_t for SGD. Figure 3: The (automatic) evolution of γ_t η for the DFW algorithm compared to the "staircase" hand-designed schedule of η_t for SGD. Data Augmentation. Since data augmentation provides a significant boost to the final accuracy, we provide additional results that make use of it. Specifically, we randomly flip the images horizontally and randomly crop them with four pixels of padding. For methods that do not use a hand-designed schedule, such data augmentation introduces additional variance which makes the adaptation of the step-size more difficult. Therefore we allow the batch size of adaptive methods (i.e. all methods but SGD) to be chosen as 1x, 2x or 4x, where x is the original value of the batch-size (64 for DN, 128 for WRN). Due to the heavy computational cost of the cross-validation (we tune the batch-size, regularization and initial learning rate), we provide results for SGD, DFW and the best performing adaptive gradient method, which is AMSGrad. For SGD the hyper-parameters are kept the same as in BID35 and BID3. We present the results in the corresponding table, where SGD* denotes the results originally reported by BID35 and BID3. The small difference between the results of SGD and SGD* can be explained by the fact that we use 5,000 fewer training samples in our experiments (these are kept for validation). The results of this table show that DFW systematically outperforms AMSGrad on this task (by up to 7% on WRN-100). These results confirm that DFW consistently outperforms AMSGrad, which is the best adaptive baseline on these tasks. In particular, DFW obtains a test accuracy which is 7% better than AMSGrad on WRN-100. Data Set. The Stanford Natural Language Inference (SNLI) data set is a large corpus of 570k sentence pairs. Each pair is labeled with one of three possible labels: entailment, neutral and contradiction. This allows the model to learn the semantics of the text data from a three-way classification problem. Thanks to its scale and its supervised labels, this data set allows large neural networks to learn high-quality text embeddings. As previous work demonstrates, the SNLI corpus can thus be used as a basis for transfer learning in natural language processing, in the same way that the ImageNet data set is used for pre-training in computer vision. Method. We follow the protocol of the original authors to learn their best model, namely a bi-directional LSTM of about 47M parameters. In particular, the reported results use SGD with an initial learning rate of 0.1 and a hand-designed schedule that adapts to the variations of the validation set: if the validation accuracy does not improve, the learning rate is divided by a factor of 5. We also report results for Adagrad, Adam, AMSGrad and BPGrad.
Following the official SGD baseline, Nesterov momentum is deactivated. Using their open-source implementation, we replace the optimization by the DFW algorithm, the CE loss by an SVM, and leave all other components unchanged. In this experiment, we use the conditional gradient direction rather than the CE gradient, since three-way classification does not cause sparsity in the derivative of the hinge loss (which is the issue that originally motivated our use of a different direction). We cross-validate our initial proximal term as a power of ten, and do not manually tune any schedule. In order to disentangle the importance of the loss function from the optimization algorithm, we run the baselines with both an SVM loss and a CE loss. The initial learning rate of the baselines is also cross-validated as a power of ten. Results. The results are presented in the corresponding table. Note that these results outperform the reported testing accuracy of 84.5%, which is obtained with CE. This experiment, which is performed on a completely different architecture and data set than the previous one, confirms that DFW outperforms adaptive gradient methods and matches the performance of SGD with a hand-designed learning rate schedule. 6 THE IMPORTANCE OF THE STEP-SIZE It is worth discussing the subtle relationship between optimization and generalization. In order to emphasize the impact of implicit regularization, all results presented in this section do not use data augmentation. As a first illustrative example, we consider the following experiment: we take the protocol used to train the DN network on CIFAR-100 with SGD, and simply change the initial learning rate to be ten times smaller, and the budget of epochs to be ten times larger. As a result, the final training objective significantly decreases from 0.33 to 0.069. Yet at the same time, the best validation accuracy decreases from 70.94% to 68.7%. A similar effect occurs when decreasing the value of the momentum, and we have observed this across various convolutional architectures. In other words, accurate optimization is less important for generalization than the implicit regularization of a high learning rate. We have observed DFW to accurately optimize the learning objective in our experiments. However, given the above observation, we believe that its good generalization properties are rather due to its capability to usually maintain a high learning rate at an early stage. Similarly, the good generalization performance of SGD may be due to its schedule with a large number of steps at a high learning rate. The previous section has qualitatively hinted at the importance of the step-size for generalization. Here we quantitatively analyze the impact of the initial learning rate η on both the training accuracy (quality of optimization) and the validation accuracy (quality of generalization). We compare the results of the DFW and SGD algorithms on the CIFAR data sets when varying the value of η as a power of 10. The results on the validation set are summarized in Figure 4, and the performance on the training set is reported in Appendix B. On the training set, both methods obtain nearly perfect accuracy across at least three orders of magnitude of η (details in Appendix B.4). In contrast, the results of Figure 4 confirm that the validation performance is sensitive to the choice of η for both methods. In some cases where η is high, SGD obtains a better performance than DFW. This is because the hand-designed schedule of SGD enforces a decay of η, while the DFW algorithm relies on an automatic decay of the step-size γ_t for effective convergence.
This automatic decay may not happen if a small proximal term (large η) is combined with a local approximation that is not sufficiently accurate (for instance with a small batch-size).However, if we allow the DFW algorithm to use a larger batch size, then the local approximation becomes more accurate and it can handle large values of η as well. Interestingly, choosing a larger batch-size and a larger value of η can in better generalization. For instance, by using a batchsize of 256 (instead of 64) and η = 1, DFW obtains a test accuracy of 72.64% on CIFAR-100 with the DN architecture (SGD obtains 70.33% with the settings of BID3). Our empirical evidence indicates that the initial learning rate can be a crucial hyper-parameter for good generalization. We have observed in our experiments that such a choice of high learning rate provides a consistent improvement for convolutional neural networks: accurate minimization of the training objective with large initial steps usually leads to good generalization. Furthermore, as mentioned in the previous section, it is sometimes beneficial to even increase the batch-size in order to be able to train the model using large initial steps. In the case of recurrent neural networks, however, this effect is not as distinct. Additional experiments on different recurrent architectures have showed variations in the impact of the learning rate and in the best-performing optimizer. Further analysis would be required to understand the effects at play. We have introduced DFW, an efficient algorithm to train deep neural networks. DFW predominantly outperforms adaptive gradient methods, and obtains similar performance to SGD without requiring a hand-designed learning rate schedule. We emphasize the generality of our framework in Section 3, which enables the training of deep neural networks to benefit from any advance on optimization algorithms for linear SVMs. This framework could also be applied to other loss functions that yield efficiently solvable proximal problems. In particular, our algorithm already supports the use of structured prediction loss functions BID28 BID30, which can be used, for instance, for image segmentation. We have mentioned the intricate relationship between optimization and generalization in deep learning. This illustrates a major difficulty in the design of effective optimization algorithms for deep neural networks: the learning objective does not include all the regularization needed for good generalization. We believe that in order to further advance optimization for deep neural networks, it is essential to alleviate this problem and expose a clear objective function to optimize. This work was supported by the EPSRC grants AIMS CDT EP/L015987/1, Seebibyte EP/M013774/1, EP/P020658/1 and TU/B/000048, and by Yougov. We also thank the Nvidia Corporation for the GPU donation. For completeness, we prove for our specific instance of Structural SVM problem. We point out that the proofs of sections A.1, A.2 and A.3 are adaptations from BID5. Propositions are numbered according to their appearance in the paper. In this section, we assume the loss L to be a hinge loss: DISPLAYFORM0 We suppose that we have received a sample (x, y). We simplify the notation f (w, x) = f x (w) and L(u, y) = L y (u). 
For simplicity of the notation, and without loss of generality, we consider the proximal problem obtained at time t = 0: DISPLAYFORM1 Let us define the classification task loss: DISPLAYFORM2 Using this notation, the multi-class hinge loss can be written as: DISPLAYFORM3 Indeed, we can successively write: DISPLAYFORM4 We are now going to re-write problem as the sum of a quadratic term and a pointwise maximum of linear functions. Forȳ ∈ Y, let us define: DISPLAYFORM5 Then we have that: DISPLAYFORM6 Therefore, problem can be written as: DISPLAYFORM7 We notice that the term ρ(w 0) in b is a constant that does not depend on w norȳ, therefore we can simplify the expression of b to: DISPLAYFORM8 We introduce the following notation: DISPLAYFORM9 DISPLAYFORM10 DISPLAYFORM11 We will also use the indicator vector: 1 y ∈ R, which is equal to 1 at index y and 0 elsewhere. Lemma 1 (Dual Objective). The Lagrangian dual of FORMULA20 is given by: DISPLAYFORM0 Given the dual variables α, the primal can be computed asŵ = −Aα. Proof. We derive the Lagrangian of the primal problem. For that, we write the problem in the following equivalent ways: DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 DISPLAYFORM4 (by strong duality).We can now write the KKT conditions of the inner minimization problem: DISPLAYFORM5 This gives α ∈ P andŵ = −Aα, since A = (ηaȳ)ȳ ∈Y by definition. By injecting these constraints in, we obtain: DISPLAYFORM6 which finally gives the desired . Lemma 2 (Optimal Step-Size). Suppose that we make a step in the direction of s ∈ P in the dual. We define the corresponding primal variables w s = −As and λ s = b s, as well as λ = b α. Then the optimal step-size is given by: DISPLAYFORM0 Proof. Given the direction s, we take the step α + γ(s − α). The new objective is given by: DISPLAYFORM1 In order to compute the optimal step-size, we compute the derivative of the above expression with respect to gamma, and set it to 0: DISPLAYFORM2 We can isolate the unique term containing γ: DISPLAYFORM3 This yields: DISPLAYFORM4 We can then inject the primal variables and simplify: DISPLAYFORM5 We present here the primal-dual algorithm that solves using the previous : DISPLAYFORM0 Initialization w 0 − Aα with α = 1 y 2: λ 1 = 0 Initialization b α with α = 1 y 3: t = 1 4: while not converged do Choose direction s t ∈ P (e.g. conditional gradient or smoothed loss) 6: DISPLAYFORM1 λ s = b s t 8: DISPLAYFORM2 Optimal-step-size 9: DISPLAYFORM3 t = t + 1 12: end while Note that when f x is linear, and when the search direction s is given by the conditional gradient, we recover the standard Frank-Wolfe algorithm for SVM BID5. We now provide some simplification to the steps 6, 8 and 9 of Algorithm 2 when a single step is taken, as is the case in the DFW algorithm. This corresponds to the iteration t = 1.Proposition 2 (Cost per iteration). If a single step is performed on the dual of, its conditional gradient is given by −∂ (ρ(w) + L y (f x (w))) wt. The ing update can be written as: DISPLAYFORM0 Proof. It is known that for linear SVMs, the direction of the dual conditional gradient is given by the negative sub-gradient of the primal BID5,. We apply this to the Taylor expansion of the network, which is the local model used for the proximal problem. Then we have that at iteration t = 1, the conditional gradient is given by: DISPLAYFORM1 It now suffices to notice that a first-order Taylor expansion does not modify the derivative at its point of linearization: for a function φ, ∂T w0 φ(w) w0 = ∂φ(w) w0. 
By applying this property and the chain rule to FORMULA5, we obtain that the conditional gradient is given by: DISPLAYFORM2 This completes the proof that the conditional gradient direction is given by a stochastic gradient. We now prove equation FORMULA5 in the next lemma. Lemma 3. Suppose that we apply the Proximal Frank-Wolfe algorithm with a single step. Let δ t = ∂ s t (f x,ȳ (w 0) − f x,y (w 0))ȳ ∈Y and r t = ∂ w ρ(w 0). Then we can rewrite step 6 as: DISPLAYFORM3 In addition, we can simplify steps 8 and 9 of Algorithm 2 to: DISPLAYFORM4 DISPLAYFORM5 Proof. Again, since we perform a single step of FW, we assume t = 1. To prove equation FORMULA5, we note that: DISPLAYFORM6 We point out the two following : DISPLAYFORM7 and: w t − w 0 − w s = −ηr t + ηr t + ηδ t = ηδ t. Since λ 1 = 0 by definition, equation FORMULA5 is obtained with a simple application of equations 40 and 41. Finally, we prove equation 38 by writing: DISPLAYFORM8 A.6 SMOOTHING THE LOSS As pointed out in the paper, the SVM loss is non-smooth and has sparse derivatives, which can prevent the effective training of deep neural networks . Partial linearization can solve this problem by locally smoothing the dual BID12. However, this would introduce a temperature hyper-parameter which is undesirable. Therefore, we note that DFW can be applied with any direction that is feasible in the dual, since it computes an optimal step-size. In particular, the following states that we can use the well-conditioned and non-sparse gradient of cross-entropy. Proposition 3. The gradient of cross-entropy in the primal gives a feasible direction in the dual. Furthermore, we can inexpensively detect when this feasible direction cannot provide any improvement in the dual, and automatically switch to the conditional gradient when that is the case. For simplicity, we divide Proposition 3 into two distinct parts: first we show how the CE gradient gives a feasible direction in the dual, and then how it can be detected to be an ascent direction. Lemma 4. The gradient of cross-entropy in the primal gives a feasible direction in the dual. In other words, the gradient of cross-entropy g in the primal is such that there exists a dual search direction s ∈ P verifying g = −As. Proof. We consider the vector of scores (f x,ȳ (w))ȳ ∈Y ∈ R. We compute its softmax: DISPLAYFORM0 j∈Y exp(fx,j (w)) ȳ∈Y. Clearly, s ce ∈ P by property of the softmax. Furthermore, by going back to the definition of A, one can easily verify that −As ce is exactly the primal gradient given by a backward pass through the cross-entropy loss instead of the hinge loss. This concludes the proof. The previous lemma has shown that we can use the gradient of cross-entropy as a feasible direction s ce in the dual. The next step is to make it a dual ascent direction, that is a direction which always permits improvement on the dual objective (unless at the optimal point). In what follows, we show that we can inexpensively (approximately) compute a sufficient condition for s ce to be an ascent direction. If the condition is not satisfied, then we can automatically switch to use the subgradient of the hinge loss (which is known as an ascent direction in the dual).Lemma 5. Let s ∈ P be a feasible direction in the dual, and v = (T w0 f x (w t)ȳ + ∆(ȳ, y) − T w0 f x (w t) y )ȳ ∈Y ∈ R |Y| be the vector of augmented scores output by the linearized model. Let us assume that we apply the single-step Proximal Frank-Wolfe algorithm (that is, we have t = 1), and that ρ is a non-negative function. 
Then s v > 0 is a sufficient condition for s to be an ascent direction in the dual. Proof. Let s ∈ P, v = (T w0 f x (w t)ȳ + ∆(ȳ, y) − T w0 f x (w t) y )ȳ ∈Y. By definition, we have that: DISPLAYFORM1 Therefore: DISPLAYFORM2 ⇐⇒ (As) (w t − w 0) + ηs b − ηT w0 ρ(w) > 0, (since s ∈ P and η > 0) DISPLAYFORM3 We have just shown that if s v > 0, then γ t > 0. Since γ t is an optimal step-size, this indicates that s is an ascent direction (we would obtain γ t = 0 for a direction s that cannot provide improvement).Approximate Condition. In practice, we consider that T w0 f x (w t) f x (w 0). Indeed, for t = 1, we have that T w0 f x (w) − f x (w 0) = O(w t − w 0), and w t − w 0 = η∂ w ρ(w 0)), which is typically very small (we use a weight decay coefficient in the order of 1e −4in our experimental settings). Therefore, we replace T w0 f x (w) by f x (w 0) in the above criterion, which becomes inexpensive since f x (w 0) is already computed by the forward pass. As can be seen in the previous primal-dual algorithms, taking a step in the dual can be decomposed into two stages: the initialization and the movement along the search direction. The initialization step is not informative about the optimization problem. Therefore, we discard it from the momentum velocity, and only accumulate the step along the conditional gradient (scaled by γ t η). This in the following velocity update: DISPLAYFORM0 In this section we provide the convergence plots of the different algorithms on the CIFAR data sets without data augmentation. In some cases the training performance can show some oscillations. We emphasize that this is the of cross-validating the initial learning rate based on the validation set performance: sometimes a better-behaved convergence would be obtained on the training set with a lower learning rate. However this lower learning rate is not selected because it does not provide the best validation performance. | We train neural networks by locally linearizing them and using a linear SVM solver (Frank-Wolfe) at each iteration. | 1,295 | scitldr |
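Referring back to the direction selection of Appendix A.6, the following sketch spells out the approximate test of Proposition 3 and Lemma 5: the softmax of the scores is the feasible dual point given by the cross-entropy gradient, and it is kept only when sᵀv > 0 (with T_{w0}f_x(w_t) approximated by f_x(w_0), as discussed above); otherwise the standard conditional gradient is used. Names are ours and the snippet is only an illustration of the rule, not the authors' code.

```python
import torch
import torch.nn.functional as F

def dual_direction(scores, y):
    """Choose the dual search direction per sample.
    scores: (batch, n_classes) raw outputs f_x(w_0); y: (batch,) ground-truth labels."""
    delta = torch.ones_like(scores)
    delta.scatter_(1, y.unsqueeze(1), 0.0)
    v = delta + scores - scores.gather(1, y.unsqueeze(1))   # augmented score differences
    s_ce = F.softmax(scores, dim=1)                          # feasible point from the CE gradient
    use_ce = (s_ce * v).sum(dim=1) > 0                       # approximate ascent test  s^T v > 0
    s_cg = F.one_hot(v.argmax(dim=1), scores.size(1)).to(scores.dtype)  # conditional gradient
    return torch.where(use_ce.unsqueeze(1), s_ce, s_cg)
```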
In this paper, we show how novel transfer reinforcement learning techniques can be applied to the complex task of target-driven navigation using the photorealisticAI2THOR simulator. Specifically, we build on the concept of Universal SuccessorFeatures with an A3C agent. We introduce the novel architectural1contribution of a Successor Feature Dependent Policy (SFDP) and adopt the concept of VariationalInformation Bottlenecks to achieve state of the art performance. VUSFA, our final architecture, is a straightforward approach that can be implemented using our open source repository. Our approach is generalizable, showed greater stability in training, and outperformed recent approaches in terms of transfer learning ability. The human's ability of navigating unknown spaces (e.g. a firefighter finding the fire hydrant very quickly) primarily relies on visual perception, as well as on previous experience and heavy training . In robotics, we would like to mimic this human behaviour. The advancement of visual navigation algorithms essentially contribute to the prevalence and mobility in robotics and therefore, many different approaches are being explored. Previous research has studied map-based, map-building, and map-less approaches (; ;). In the past, map-based and map-building approaches have been favoured. However, they heavily depend on an accurate mapping of the environment. Also, it requires a carefully executed human-guided training phase which limits its generalizability . With recent advances in Deep Reinforcement Learning (DRL) (;, map-less navigation has experienced major advancements). It has been demonstrated that DRL-based methods are now able to solve navigation tasks in a more human-like manner . Research has shown that DRL-based navigation, in particular target driven visual navigation, is still a challenging task especially when targets are represented in the form of visual information that is highly dynamic. In previous navigation paradigms, the agent navigates to a target demonstrating specific properties (e.g. a yellow cone, such as in the case of), whose location may change over time. In contrast, in target driven visual navigation, the agent should be able to learn to navigate in a persistent state space to a dynamic set of goals. The agent is required to learn to navigate when both the goal and the current state are presented as visual images. A current challenge for DRL algorithms is learning new tasks or goals that vary from what the agent was initially trained for. This ability is called transfer learning. There are two popular strategies for achieving transfer learning in DRL, either by using the concept of General Value Functions (GVF) or by using Successor Feature Approximation (SFA) . For the task of target driven visual navigation, demonstrated that an A3C agent using the concept of GVF can improve the transfer learning ability. GVF does not however allow us to easily see the underlining process of learning the dynamics of tasks and GVF agents also frequently struggle in complex environments . The second strategy, applying SFA, enables us to capture the dynamics of the environment by attempting to learn future state visitations, although these also encounter limitations when facing multiple tasks. Universal Successor Features Approximators (USFA), which is an extension of SFA, is able to consider multiple tasks and can improve the transfer learning ability of the agent. 
In summary, our research contribution is threefold: • For the first time in the literature, we apply Universal Successor Feature Approximators (USFA) for the complex task of target driven visual navigation. Our new approach provides a stable training mechanism and enhances the transfer reinforcement learning ability in complex environments. • We introduce the concept of a Successor Feature Dependant Policy (SFDP), a novel architectural contribution in which the policy can directly make use of the information presented by USFA (an abstract map in our case). This important add-on significantly improves the transfer learning ability of the DRL agent. • Finally, we contribute Variational Universal Successor Feature Approximators (VUSFA), by adopting the concept of Variational Information Bottlenecks. We show that this combination works stably with complex tasks such as target driven visual navigation in the photo-realistic AI2THOR environment. Besides stable convergence, our approach shows possible ways in which transfer learning could be improved in the future. Transfer in reinforcement learning agents can be described as the ability of an agent to generalize over different tasks while sharing knowledge between them. In this paper we tackle the problem of transfer reinforcement learning with respect to Successor Feature Reinforcement Learning (SF-RL) . Specifically we develop on the concept of Universal Successor Feature RL (USF-RL) (b) which is an extension of SF-RL. In the next section we will first introduce basic concepts where we have used throughout our research. We can formalize the goal-directed navigation task as a Markov Decision Process (MDP). The transition probability p(s t+1 |s t, a t) defines the probability of reaching the next state s t+1 when action a t ∈ A is taken in state s t ∈ S. For any goal g ∈ G (in our case G ⊆ S), we define a goal dependent reward function r g (s t, a t, s t+1) ∈ R and a discount function γ g (s t) ∈ (for terminal state, γ g = 0). For any policy π(a t |s t), a GVF can be defined as follows: The assumption for any goal g is that there exists an optimal value function V π * g g (s), which is evaluated according to a goal oriented optimal policy π * g. The general aim of agent's learning is to find the optimal policy π * that maximises the future discounted rewards starting from s 0 and following π *. To generalize over the goal space G, the agent needs to learn multiple optimal policies as well as optimal value functions in order to navigate to a goal. Each goal is considered a new task and the agent should be able to quickly adapt to find V π * g g (s) and π * g. Universal Successor Features (USF) (b) in an extension of the idea of Successor Features (SF) described in and. Similar to the concept of SF, USF also follows the idea that the immediate scalar reward r g can be defined as a linear combination of state representations φ and a goal dependent reward prediction vector ω g as in Equation 2. In the Equation 2, φ (s t, a t, s t+1) represents the dynamics or the physical features the agent sees when transitioning between states s t and s t+1 after taking an action a t. We approximate φ (s t, a t, s t+1) as φ (s t+1) following Ma et al. (2018a); since it is convenient for the agent to rely on the state representation of the new state φ (s t+1) to recover the scalar reward r g rather than trying to capture physical features of transition dynamics. 
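As a small numerical illustration of the decomposition in Equation 2, and of the universal successor features ψ that the next paragraph formalizes, the following toy rollout (random numbers and dimensions chosen purely for illustration) checks that summing discounted rewards r_g = φ(s_{t+1})ᵀω_g gives the same value as first summing the discounted φ's and then applying ω_g.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, T, d = 0.99, 5, 4                      # discount, rollout length, feature dimension
omega_g = rng.normal(size=d)                   # goal-dependent reward-prediction vector
phis = rng.normal(size=(T, d))                 # phi(s_{t+1}) along a short rollout

rewards = phis @ omega_g                       # Equation 2: r_g ~= phi(s_{t+1})^T omega_g
psi = sum((gamma ** k) * phis[k] for k in range(T))    # discounted sum of future phi's
v_from_psi = psi @ omega_g                     # value recovered through the decoupling
v_from_rewards = sum((gamma ** k) * rewards[k] for k in range(T))
assert np.isclose(v_from_psi, v_from_rewards)  # both ways of computing V agree
```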
This allows us to describe the value function as a cumulative sum of the discounted φ as follows: where ψ π g (s t) is defined as the Universal Successor Features (USF) of state s t (b). Intuitively, ψ π g (s t) can be thought of as the expected future state occupancy. Unlike traditional Successor Feature Approximation, USF is based on both the state and the goal. The value function defined with USFA has similar properties to GVFs. The modified V π g (s) with ψ incorporates shared dynamics between tasks. Learning the USFA is accomplished in the same way as the value function update by using the following TD (Temporal Difference) error: As illustrated by (b) the vectors ψ π g (s t), φ (s t+1) and ω g can be approximated by neural networks parameterized by θ π, θ φ and θ ω. In this paper we incorporate the concept of USF with an A3C agent and trained all three sets of parameters jointly. The USFA model introduced by Ma et al. (2018b) extends the concept of SF generalized over multiple tasks with the actor-critic algorithm. However, their method is yet to be evaluated with complex tasks such as target driven visual navigation. In this section we point out the challenges when adapting Ma et al.'s model to complex tasks. The state representation φ vector mentioned in the USF architecture plays a crucial role. It decouples the scalar reward r t and learns the USFA ψ π g (s t) with the TD-error loss function (Equation 4). The authors Ma et al. (2018b) propose learning φ using an autoencoder prior to the main reinforcement learning algorithm. φ is supposed to capture the salient information about each state s t, but when the states consist of complex visual information such as photo-realistic images, defining an optimal φ representation with an autoencoder can be problematic. Training a convolutional autoencoder which generalize over many states is often prone to over-fitting. Training ω g by a regression loss with respect to a scalar reward r t (Equation 2) can also be problematic. The main reason is that this loss is not informative enough to train ω g because during the initial stages of training, the agent will observe very small negative rewards and rarely see the large positive reward goal locations. ω g when decoupled from the scalar reward, captures information about the goal. Ma et al. (2018b) propose to train ω g with a separate neural network that uses goal features as input. Training a separate network in our scenario easily leads to over-fitting on the limited number of trained goal locations seen by the agent during training and leads to poor generalization of the agent to new unknown goals. Our first contribution is the application of USFA for the complex task of target driven visual navigation. This section introduces how we created a stable architecture while addressing the aforementioned issues. Rather than using a separate network such as an autoencoder to generate φ, we argue it is more beneficial if the agent learns task dependant state representation features while exploring the state space. Thereby, the agent should learn to capture only the salient features relevant to the task of navigation and ignore features that may be solely important for reconstruction. Since in target driven visual navigation, the goal space is a subset of the state space, we used a siamese network to generation both φ and ω g. 
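A minimal PyTorch sketch of the shared (siamese) encoder just described is given below: the same trunk embeds both the current observation and the goal (assumed here to already be ResNet-50 features, as in our pipeline), φ is read from the state branch, and ω_g is predicted from the goal branch. Layer sizes and module names are illustrative assumptions rather than the exact architecture of Figure 1.

```python
import torch
import torch.nn as nn

class SiamesePhiOmega(nn.Module):
    """Shared encoder yielding phi(s) from the state branch and omega_g from the goal branch."""
    def __init__(self, feat_dim=2048, embed_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared ("siamese") trunk
            nn.Linear(feat_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
        )
        self.omega_head = nn.Linear(embed_dim, embed_dim)  # omega predicted from the goal embedding

    def forward(self, state_feats, goal_feats):
        phi_s = self.encoder(state_feats)             # state representation phi(s_t)
        phi_g = self.encoder(goal_feats)              # goal embedding (same weights)
        omega_g = self.omega_head(phi_g)
        return phi_s, phi_g, omega_g

def decoupled_value(psi, omega_g):
    """Decoupled critic: V(s, g) = psi(s, g)^T omega_g."""
    return (psi * omega_g).sum(dim=-1)
```

The helper at the end spells out the decoupling V(s, g) = ψ(s, g)ᵀω_g that the critic loss of the next section relies on.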
A major problem with training a stable USFA based DRL agent is the difficulty of training an ω g that works successfully with the scalar reward regression loss (see Equation 2). In our case, the reward structure is ad-hoc: for every time-step the agent either receives a small negative penalty or a large reward for reaching the goal location. The positive rewards are experienced by the agent much less frequently, particularly at the beginning of training. When training large scale tasks, this class imbalance can be even more detrimental because the reward structure needs to be learnt before the reinforcement learning agent is able to learn a meaningful policy for navigation. If not, this creates an unstable ω g which can cause the network to diverge. To overcome this problem, we propose to exploit the A3C agent's critic update (Value function update). In a conventional A3C algorithm, each agent needs to optimise the critic and policy functions after each episode with the N-step Return . Since the value function can be interpreted by the USFA concept as being a linear combination of ψ g (s t) and ω g, the critic's loss function in an A3C agent can be used to learn ω g. Unlike training the network with a scalar reward regression loss, this method is more informative because the loss function depends on the episode's discounted scalar rewards. The discounted return calculation for a single episodic step in A3C is depicted in Algorithm 1 in the Supplementary Materials. Equation 5 shows how the value function can be decoupled with ψ and ω. Equation 6 shows the conventional one step TD loss for the SFA branch. It needs to be highlighted that ψ gets updated with both Loss ψ T D and Loss V T D. To counter the problem of the having only a few training goals to train ω, we utilised the embeddings generated from the Siamese layer as goal information and trained ω as another branch of the USFA-A3C agent (see Figure 5 in the Supplimentary Materials). Our second contribution is the addition of a Successor Feature Dependant Policy (SFDP) to the USFA implementation. As mentioned before, ψ g (s t) can be seen as an abstract representation of the cumulative sum of the future states the agent will visit by following an optimal policy . Traditionally, successor features are not directly consulted when determining an action (b). However, we hypothesise that feeding the abstract map of future states could be useful in determining the next action. USF can be described as representing the cumulutive sum of discounted future states the agent visits following an optimal policy. This property by itself helps with transfer learning because eventhough different goals have different optimal paths, they can share some common sub-paths. For example, when tasked with finding the microwave and sink in a kitchen, the initial steps of the agent in going to the kitchen will be similar for both tasks. We hypothesised that if the policy has direct access to the USF (see Equation 7), the agent will be able to learn from these similar paths. By directly concatenating ψ g with the final layer of the policy head naively in ψ g being updated with gradients from the conventional bellman optimality Equation 3 and the policy gradients Figure 1: Proposed Network Architecture "VUSFA": The model's input is the current state of the agent s t and the goal location g as images. These go through a shared simaese encoder E(z|s t). The reparametrized output z is used to train the ω vector. 
The policy is conditioned on the USF vector (dotted line indicates gradients do not flow from policy to the USFA head). The USFA ψ is trained with the temporal difference error using φ to give the expected future state occupancies. The discounted episode return is used to train both ω and USFA vectors. of the A3C agent. This can harm the true USF representation and can reduce the transfer learning capabilities of the agent. Therefore in the final model, we stopped the gradient flow from the policy head to the USF branch. The stopping of policy gradients for the USF branch is illustrated in Figure 1 with dotted lines. The next modification we made was the introduction of the Variational Siamese Bottleneck (VSB) to improve the quality of USFA and the reward prediction vector. We observed that the embeddings generated by the siamese layers play a key role in improving the overall performance due to their effect on generating ψ and ω g. We wanted to improve these embeddings without harming stable convergence. Our main hypothesis in selecting the variational information bottleneck was that it will be able to guide the Siamese layers to extract the most informative and meaningful features which will then lead to better generalisation. Having a siamese layer that generates embeddings which produce a robust φ and ω g is key to improving the transfer learning ability of the overall model. If these embeddings are not informative enough, the model can overfit to the training set of goals. To improve the embeddings without harming the training stability, we adapt the concept of Deep Variational Information Bottleneck . Our show that this addition improves the performance of the overall network. In the next sections we will describe the theory behind the Variational Information Bottleneck and the training procedure we used in our adaptation of it to the VUSFA agent. The theory of the Information Bottleneck, introduced by , shows that a deep neural network can be thought of as a trade-off between having a compressed latent representation Z with respect to inputs X while still preserving relevant information with respect to the outputs Y. Mathematically, this idea of generating an optimal Z can be achieved using Mutual Information Theorem as follows: Where I(X; Z) is the mutual information between the input features X and the latent representation Z from the hidden layer and I(Y ; Z) is the mutual information between output Y and Z. Intuitively, the neural network should predict the output while reducing the mutual information between input and the encoding. The minimisation of I(X; Z) encourages the agent to compress the most relevant information about X into Z for the prediction of Y. Deep Variational Information Bottlenecks introduced by is a parameterized approach to the Information Bottleneck theory that can be easily used with deep neural networks. This was done by introducing a regularized objective function as follows. Minimizing the loss function J(q(Y |Z), E(Z|X)) encourages the neural network to generate an informative compressed embedding Z from the input X. Equation 9 consists of a parametric encoder function E(Z|X) that maps input features X into latent vector Z, a decoder function q(Y |Z) that maps Z to output labels Y, and a mutual information constraint I(X, Z) ≤ I c. The generation of Z by the encoder under the Information Constraint I c can be though of as a bottleneck layer in the network. This bottleneck Z could be applied as any intermediate layer of the neural network. 
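As a sketch of how the constrained objective above translates into code, the snippet below computes the closed-form KL term between the diagonal-Gaussian encoder output and a standard-normal prior r(z), adds it to the task loss weighted by β, and updates β by a dual-gradient step. Since Equation 11 is not reproduced here, the particular β update is an assumption following the adaptive form used in the variational-bottleneck literature the paper cites; I_c = 0.2 matches the value reported in the hyper-parameter table.

```python
import torch

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch."""
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()

def reparameterize(mu, logvar):
    """Reparametrization trick: z = mu + sigma * eps with eps ~ N(0, I)."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def vib_terms(task_loss, mu, logvar, beta, i_c=0.2, beta_lr=1e-5):
    """Bottleneck-regularized objective plus a dual-gradient update of beta (assumed form of Eq. 11)."""
    kl = gaussian_kl(mu, logvar)
    loss = task_loss + beta * (kl - i_c)                     # constraint I(X, Z) <= I_c via beta
    new_beta = max(0.0, beta + beta_lr * (kl.item() - i_c))  # beta grows while the constraint is violated
    return loss, new_beta
```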
In our scenario, we applied the Information Bottleneck on the output of the siamese layers (see Figure 1) due to it's direct effects on the estimations of π, ψ, and ω. The siamese layers can therefore be thought of as the encoder E(Z|X). We call this the Variational Siamese Bottleneck (VSB). The VSB enforces an upper-bound I c on the mutual information term I(Z, X) to encourage the encoder E(Z|X) to focus on the most discriminative features of the input. The encoder needs to generate a latent distribution Z where the mutual information between X and Z does not exceed the scalar Information Constraint I c where I c is a hyperparamter. Since the I(Z, X; θ) ≤ I c term in Equation 9 is intractable, we cannot directly apply it to a neural network trained with back-propagation. introduced a modified version by applying a variational lower bound and a lagrangian multiplier β that needs to be updated adaptively. This in the final deep variational information bottleneck loss in Equation 10. Since the KL divergence is calculated between two distributions, the encoder outputs the mean and variance of Z from which a sample is taken using the reparametrization trick. We update the Lagrangian multiplier β in a similar way to. β gets updated for each actor thread adaptively following Equation 11. The final loss function of our agent with the Variational Information Bottleneck is shown in the Equation 12. -where L total is the combined loss function Therefore, the agent needs to minimize both L total and the KL divergence term [KL[E(z|x) r(z)] − I c ] at the same time. λ π,λ ψ and λ V are hyperparameters. Our network (see Figure 1) takes the four most recent states the agent has visited as s t and four repeated goal states as the g. Then the resnet-50 embeddings related to both s t and g go through a siamese encoder and generates the mean and variance vectors, which are the parameters of the latent distribution Z. The φ embeddings with respect to the goal g then go through a fully connected layer to predict the goal coefficients ω vector. The network's last part predicts the policy π(a t |s t, g, ψ) and the USFA ψ g (s t). Similar to, we include a separate policy (π), USFA ψ and the expected sum of future rewards prediction heads for each scene (e.g. Bathroom). 6.1 TRAINING VUSFA The training procedure for our model is based on the A3C algorithm and is shown in Algorithm 1. The reparameterized embedding was not directly used in predicting the policy π and the USFA ψ to maintain a stable procedure. Instead, the mean vectors of the state representation φ for both goal and state were used. These mean vectors were concatenated together and fed through the layers used for predicting the policy and USFA as shown in Figure 1. We used the reparameterized embeddings from the bottleneck layer to predict ω since the ω vector is the most important element in the USF architecture that decouples the value function. The objective behind this reparametrization was to make create an ω that is robust and generalizable that would not easily overfit. During inference, we use reparameterized values for both goal and state encoding which we assume added more generalizability and exploration and improved zero-shot navigation of the agent. The evaluation of the agent under the task of target driven visual navigation has been conducted in two ways. First, the agent was evaluated on its zero-shot learning ability. The second evaluation criteria was the time taken for the agent to adapt to new unknown goals when fine-tuning. 
Both evaluation criteria belong to the domain of Transfer in Reinforcement Learning and will be described in the following two sections. Prior to evaluation, all models were trained on four scenes for 20 different goals until convergence. We took the deep Siamese A3C model by as the baseline, since it is the most relevant work done using the AI2THOR simulator. Moreover, we were also successful in training the agent from scratch with a CNN (replacing the resnet features). Although adding an LSTM in further performance increases, we used resnet features instead. We decided to do so to keep our training time low and the evaluation consistent. We evaluated all variations of our proposed model in a benchmark with the state-of-the-art: Model 04: Adding VSB to Model 03 (we call this VUSFA) The aim of zero-shot navigation is to see weather the agent is be able to reach a wide range of goals while being trained on only a very limited subset of goals. In particular, the zero-shot learning capability of the agent was evaluated by testing the agent's average successful attempts to find new goal locations. In the evaluation process, we follow a similar criteria to, in which we tested whether the agent was able to reach the goal in less than 500 time-steps. We constituted this as a successful episode. We evaluated the success rate of reaching all goals in each environment. We repeated this procedure 10 times (trials), in which the agent always started from a random location. We trained our models on only 20 goal states spread evenly across four environments using the AI2THOR simulator. This represents less than 1.2% of the total number of states. In-spite of this, even the worst performing model was able to generalize to over 16% of all states. Table 1 shows that all proposed algorithms (Model 02-04) are able to successfully reach more locations without training than the baseline Model 01. The USFA-based policies consistently generalise better than. It can take a large amount of time (in the order of several days), to train a complex DRL agent. Therefore, it is impractical to re-train agents when the task slightly changes. Instead, the agent should be able to use previous knowledge to adapt to new tasks quickly. We evaluated the transfer learning ability of all four models to 20 new goals. In order to evaluate how the closeness of the new goals effect the agent's performance, we tested the models on states that are 1, 2, and 4 steps away from already trained goals as well as completely random goals. We sampled 5 random states from each environment, excluding the already trained goals to get the new goals. We used 5 trials, meaning repeating this process 5 times with random states which are different to the previously learned ones. To ensure a fair comparison, we kept the random seeds constant between the models. Figure 2 shows the number of time-steps required for the model to adapt to new goals. It becomes clear that the USFA-based policies are consistently able to decrease the number of steps taken to reach the goal faster than the baseline model. Moreover, using the SFDP with USFA ed in a further decrease in time-steps required and thus showed to have a positive effect on the model's transfer learning ability. As shown in Figure 2, VUSFA is usually able to further improve performance. We proposed Variational Universal Successor Features Approximator (VUSFA) to solve rather complex tasks, such as target driven visual navigation in photorealistic environments using the AI2THOR simulator. 
To our knowledge, this is the first time the Deep Variational Information Bottleneck theory has been applied with Universal Successor Features in Deep Reinforcement Learning. Our indicate that VUSFA is able to improve the transfer learning ability in respect to previous state-of-the-art GVF and USF-RL based research. Our approach is generalizable and can be easily adapted to various tasks other than navigation. For re-implementation, we provide the source code via our github repository 1. Our approach introduces a new perspective and should be considered in future research aiming to improve transfer learning for Deep Reinforcement Learning. In particular, further research could look into exploration of the semantical impacts of φ, ω, and ψ. Algorithm 1 Variational A3C-USF Algorithm pseudocode for each actor thread Assuming global parameter vectors for value function and policy as θ π, θ ψ and θ ω Assuming global shared counter as T = 0 Assuming thread specific parameter vectors for value function and policy as θ π, θ ψ and θ ω d Initialize thread step counter t ← 1 1: repeat 2: Reset gradients: Synchronise thread specific parameters with global network θ π = θ π, θ ψ = θ ψ, θ ω = θ ω 4: Get initial state s t, goal g 6: Perform an action a t according to the current policy π(a t |s t, g : θ π) Receive the scalar reward r t,the new state s t+1 Collect roll-outs [s t, r t, s t+1] t ← t + 1 11: until terminal s t or t − t start == t max Bootstrapping the return R from last state of the episode 14: for i ∈ t − 1,...,t start do 17: Computing dθ ψ,dθ π and dθ ω, perform asynchronous update of θ ψ,θ π and θ ω 28: until T > T max We used the fundamental A3C framework to train our agent. The initial A3C agent contains 100 threads assigned to 20 goals (5 goals from each scene). The 20 goals used for training can be seen in Figure 3. For the transfer learning tasks, 20 new goals were randomly selected excluding the trained goals. An ideal trajectory of an agent is illustrated in the Figure 4. For training, we used a high performance computer with 4x Xeon 6136 @3Ghz Processor (total of 48 cores); Memory -1.18TB, GPU -Tesla V100 with 32GB memory per GPU. We did limited hyperparameter tuning as detailed in Table 2. λ V = 0.5, λ ψ = 0.0005 and I c = 0.2 gave good and were used in the plots. Values Explored Values Used Figure 5: Architecture used for the SFDP-A3C model The input to the model is the current state of the agent s t and the goal location g as images. These go through a shared siamese layer to generate embeddings for the goal and the state. The policy is conditioned on the USF vector (dotted line indicates gradients do not flow from policy to the USFA head). The USFA ψ is trained with the temporal difference error using φ to give the expected future state occupancies. The discounted episode return is used to train both ω and USFA vector. Designing a value function that is capable of adapting to different tasks is based on a concept known as General Value Functions (GVF) . An extension to this idea, called Universal Value Function Approximators (UVFAs), was introduced by. The core idea of UVFAs is to represent a large set of optimal value functions by a single, unified function approximator that generalises over both states and goals. Although theoretically sound, learning a sufficiently working UVFA is a challenging task (b). Successor Representation (SR) emerged from the field of cognitive science and is modelled on the brain's capability to create a reusable predictive map. 
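For readers reconstructing Algorithm 1, the following single-thread sketch paraphrases one actor's rollout and loss computation: an n-step rollout is collected, the return R and the φ-return used as the USF target are bootstrapped from the last state, and the three weighted losses are accumulated. The environment and agent interfaces (env.step, agent.phi, the ordering of the returned heads) are placeholders, the asynchronous shared-parameter machinery of A3C is omitted, and λ_V = 0.5 and λ_ψ = 0.0005 follow the values listed above.

```python
import torch

def rollout_losses(agent, env, goal, t_max=5, gamma=0.99,
                   lam_pi=1.0, lam_v=0.5, lam_psi=0.0005):
    """One n-step rollout and the three VUSFA loss terms (policy, critic, USF TD)."""
    traj, s, done = [], env.observation(), False
    for _ in range(t_max):
        pi, psi, v = agent(s, goal)                  # v = psi^T omega (decoupled critic)
        a = torch.distributions.Categorical(pi).sample()
        s_next, r, done = env.step(a)
        phi_next = agent.phi(s_next)                 # phi(s_{t+1}), used in the USF target
        traj.append((a, r, phi_next, psi, pi, v))
        s = s_next
        if done:
            break

    with torch.no_grad():                            # bootstrap from the final state
        _, psi_boot, v_boot = agent(s, goal)
        R = torch.zeros(()) if done else v_boot
        Psi = torch.zeros_like(psi_boot) if done else psi_boot

    loss_pi = loss_v = loss_psi = 0.0
    for a, r, phi_next, psi, pi, v in reversed(traj):
        R = r + gamma * R                            # discounted n-step return
        Psi = phi_next + gamma * Psi                 # discounted phi-return: TD target for psi
        loss_pi = loss_pi - torch.log(pi[a]) * (R - v.detach())
        loss_v = loss_v + (R - v).pow(2)             # trains omega (and psi) through v = psi^T omega
        loss_psi = loss_psi + (Psi - psi).pow(2).sum()
    return lam_pi * loss_pi + lam_v * loss_v + lam_psi * loss_psi
```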
Recently, SR was combined with Deep Learning to create Deep Successor Reinforcement Learning by. Based on Deep Q-Network (DQN) fundamentals, Kulkarni et al. (2016 were able to learn task-specific features that were able to quickly adapt to distal changes in the reward by fine-tuning only the reward prediction feature vector. Transfer in RL has been evaluated on multiple similar tasks by , who introduced Successor Features (SF). They adapted SR to be applicable in the continuous domain and were able to show how classic policy improvement can be extended to multiple policies. to Deep Learning, additionally showed how these models can be stably trained. For the problem of visual navigation, a SR-based DRL architecture similar to was used by. Unlike our approach, they showcase their solution in simple maze-like environments using DQN as the baseline method, while we use actor-critic methods in a photorealistic simulation environment. DQN-based techniques frequently suffer from stability issues when applied to complex problems, such as large-scale navigation. USFA (b) is the most recent extension to SR. Unlike previous methods, which were based on DQN, USFA learns a policy directly by modeling it with actor-critic methods. Similar to SR, USFA modifies the policy with successor features. DQN-based approaches learn an optimal action-value function indirectly. USFA attempts to obtain a GVF which can be directly used to obtain an optimal policy. It can be seen as a combination of the SF and UVFA methods as discussed earlier. Unlike methods based on DQN, USFA is able to directly optimize an agent to learn multiple tasks simultaneously. However, USFA has not been tested on high-dimensional complex problems. This paper shows the adaption of USFA to more complex task for target driven visual navigation in a photorealistic environment. also extended the idea of SF-RL in to USFA by adapting the concepts of GVF and generalized policy improvement theorem. work describes a USFA training method by adapting ε-greedy Q learning for set of simple tasks in Deepmind-Lab simulator. We found their method is hard to adapt to our problem domain mainly due to defining the state representation vector as a linear combination of a set of tasks. This method can work with a simple set of known tasks such as collecting different objects in the simulator. But in our case, each new task is similar to a new goal location selected from the state space. Also, the authors have not extended this algorithm to work with policy gradient methods. Similar to the UVFA , a goal-dependant SF is approximated with Universal Successor Feature Approximators (USFA) (b; . The goal dependant USF obeys the bellman update rule similar to SF (Equation 13).. In Equation 14 ψ π (s t, g t ; θ ψ), ω(g t ; θ ω) and φ (s t, a t, s t+1 ; θ φ) should generalize over both goals and states. The fundamental architecture to learn USF was proposed by Ma et al. (2018b) and is shown in Figure 6. The architecture consists of three sub-networks that need to be trained. The first one is the state representation network with parameters Φ. This network is an autoencoder which gets trained before the training of the DRL agent. Then there are two heads to approximate the USF (θ π), and policy (θ ψ). These two heads share parameters in the early layers with a fork in the last fully connected layer. Finally, θ w is learned. This final reward prediction network should be able to generate ω for different goals which will later be used to train the USF agent. 
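Equations 13 and 14 referenced above did not survive extraction. Based on the standard successor-feature formulation, they can plausibly be reconstructed as the goal-conditioned Bellman recursion for ψ and the induced reward/value decomposition; the exact notation of the original may differ.

```latex
% Plausible reconstruction of the goal-conditioned USF recursion (Eq. 13) and the
% decomposition it induces (Eq. 14); notation may differ from the original paper.
\begin{align}
\psi^{\pi}(s_t, g_t;\theta_\psi) &=
  \mathbb{E}\big[\,\phi(s_t, a_t, s_{t+1};\theta_\phi)
  + \gamma\, \psi^{\pi}(s_{t+1}, g_t;\theta_\psi)\,\big] \tag{13}\\
V^{\pi}(s_t, g_t) &= \psi^{\pi}(s_t, g_t;\theta_\psi)^{\top} \omega(g_t;\theta_\omega),
\qquad
r(s_t, a_t, s_{t+1}, g_t) \approx \phi(s_t, a_t, s_{t+1};\theta_\phi)^{\top} \omega(g_t;\theta_\omega) \tag{14}
\end{align}
```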
It is important to compare the architectural differences between the USF-RL methods and the previous DQN-based SF-RL methods. Unlike in SF-RL, φ is not learned simultaneously while training the DRL agent. The φ prediction network in the USF-RL agent is trained with an autoencoder separately, prior to the training phase. In contrast to the SF-RL methods, which store an ω vector for each task, in USF-RL the ω vector is predicted with a network that takes information about the goal as the input. Maintaining a separate network to generate ω given goal information allows the agent to quickly adapt to novel goals. In Equation 15 the term I(X, Z) is the mutual information between the features X and the hidden layer Z. I(Y, Z) represents the mutual information between the output Y and the hidden layer Z. Intuitively, Equation 15 says that the neural network should predict the label while reducing the mutual information between the input and the encoding. Reducing the I(X, Z) term encourages the agent to compress X into Z, where Z consists of only the most important information about the input features. β is the trade-off hyperparameter which controls the quality of Z. The mutual information term I(X, Z) between the input features and the encoder embeddings is given in Equation 16. I(X, Z) = ∫ p(x, z) log( p(x, z) / (p(x)p(z)) ) dx dz = ∫ p(x) E(z|x) log( E(z|x) / p(z) ) dx dz. In Equation 16, the joint distribution p(x, z) is expressed with the encoder E(z|x) and the input distribution p(x). Another important point about Equation 16 is that the distribution of the latent variable, p(z), is intractable. The p(z) term is therefore replaced with a known prior distribution r(z). This substitution introduces an upper-bound on the mutual information term I(X, Z), as given in Equation 17. I(X, Z) ≤ ∫ p(x) E(z|x) log( E(z|x) / r(z) ) dx dz. In Equation 17, the integral over the input distribution p(x) can be written as an expectation. This expectation simplifies I(X, Z) to a KL divergence between the distribution of Z generated by the parametric encoder and the approximated prior r(z). Replacing I(X, Z) with this KL divergence makes it easier to train a neural network with the loss function mentioned in Equation 9. The interpretation of I(X, Z) as a KL divergence also allows us to treat it as a constraint whose multiplier β is updated adaptively during the training of the neural network. Moreover, Alemi et al. evaluated the method on supervised learning tasks and showed that models trained with a VIB can be less prone to overfitting and more robust to adversarial examples. In this paper, we successfully illustrated the adoption of the Deep VIB concept together with the USF to substantially improve the transfer learning ability of a navigation agent.
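The variational bound above is usually implemented with a diagonal-Gaussian encoder, a unit-Gaussian prior r(z), and the reparameterization trick. The sketch below shows the generic Deep VIB loss in PyTorch; the encoder/task networks are stand-ins, and the adaptive β update with the budget I_c mirrors the hyperparameter reported earlier rather than the authors' exact implementation.

```python
# Generic Deep VIB loss sketch (assumes a Gaussian encoder and a unit-Gaussian prior r(z)).
import torch

def sample_z(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps.
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

def vib_loss(mu, log_var, task_loss, beta):
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch: upper-bounds I(X, Z).
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0).sum(dim=1).mean()
    return task_loss + beta * kl, kl

def update_beta(beta, kl, i_c=0.2, step=1e-5):
    # Dual-gradient-style adaptive beta: grow beta when the KL exceeds the budget I_c.
    return max(0.0, beta + step * (kl.item() - i_c))
```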
A major challenge in learning image representations is the disentangling of the factors of variation underlying the image formation. This is typically achieved with an autoencoder architecture where a subset of the latent variables is constrained to correspond to specific factors, and the rest of them are considered nuisance variables. This approach has an important drawback: as the dimension of the nuisance variables is increased, image reconstruction is improved, but the decoder has the flexibility to ignore the specified factors, thus losing the ability to condition the output on them. In this work, we propose to overcome this trade-off by progressively growing the dimension of the latent code, while constraining the Jacobian of the output image with respect to the disentangled variables to remain the same. As a , the obtained models are effective at both disentangling and reconstruction. We demonstrate the applicability of this method in both unsupervised and supervised scenarios for learning disentangled representations. In a facial attribute manipulation task, we obtain high quality image generation while smoothly controlling dozens of attributes with a single model. This is an order of magnitude more disentangled factors than state-of-the-art methods, while obtaining visually similar or superior , and avoiding adversarial training. A desired characteristic of deep generative models is the ability to output realistic images while controlling one or more of the factors of variation underlying the image formation. Moreover, when each unit in the model's internal image representation is sensitive to each of these factors, the model is said to obtain disentangled representations. Learning such models has been approached in the past by training autoencoders where the latent variables (or a subset of them) are constrained to correspond to given factors of variation, which can be specified (supervised) or learned from the data (unsupervised) BID22 BID29 BID15. The remaining latent variables are typically considered nuisance variables and are used by the autoencoder to complete the reconstruction of the image. There exists one fundamental problem when learning disentangled representations using autoencoders, sometimes referred to as the "shortcut problem" BID29. If the dimension of the latent code is too large, the decoder ignores the latent variables associated to the specified factors of variation, and achieves the reconstruction by using the capacity available in the nuisance variables. On the other hand, if the dimension of the latent code is small, the decoder is encouraged to use the specified variables, but is also limited in the amount of information it can use for reconstruction, so the reconstructed image is more distorted with respect to the autoencoder's input. BID29 showed that this trade-off between reconstruction and disentangling can indeed be traversed by varying the dimension of the latent code. However, no principled method exists to choose the optimal latent code dimension. The shortcut problem was also addressed by using additional mechanisms to make sure the decoder output is a function of the specified factors in the latent code. One approach, for example, consists in swapping the specified part of the latent code between different samples, and using adversarial training to make sure the output distribution is indeed conditioned to the specified factors BID22 BID19 BID29. However, adversarial training remains a difficult and unstable optimization problem in practice. 
Based on these observations, we propose a method for avoiding the shortcut problem that requires no adversarial training and achieves good disentanglement and reconstruction at the same time. Our method consists in first training an autoencoder model, the teacher, where the dimension of the latent code is small, so that the autoencoder is able to effectively disentangle the factors of variation and condition its output on them. These factors can be specified in a supervised manner or learned from the data in an unsupervised way, as we shall demonstrate. After the teacher model is trained, we construct a student model that has a larger latent code dimension for the nuisance variables. For the student, we optimize the reconstruction loss as well as an additional loss function that constrains the variation of the output with respect to the specified latent variables to be the same as the teacher's. In what follows, we consider autoencoder models (E, D) that receive an image x as input and produce a reconstruction x̂: D(E(x)) = x̂. We consider that the latent code is always split into a specified factors part y ∈ R^k and a nuisance variables part z ∈ R^d: E(x) = (y, z), D(y, z) = x̂. Consider a teacher autoencoder (E_T, D_T), with nuisance variables dimension d_T, and a student autoencoder (E_S, D_S), with nuisance variables dimension d_S > d_T. Because the dimension of the nuisance variables of the student is larger than in the teacher model, we expect a better reconstruction from it (i.e. ||x − x̂_S|| < ||x − x̂_T||, for some norm). At the same time, we want the student model to maintain the same disentangling ability as the teacher as well as the conditioning of the output on the specified factors. A first order approximation of this desired goal can be expressed as ∂x̂_S[j]/∂y[i] ≈ ∂x̂_T[j]/∂y[i], where j ∈ {1 . . . H·W·C}, H, W and C are the dimensions of the output image, and i ∈ {1 . . . k} indexes over the specified factors of variation. In this paper we propose a method to impose this first-order constraint, which we term Jacobian supervision. We show two applications of this method. First, we propose an unsupervised algorithm that progressively disentangles the principal factors of variation in a dataset of images. Second, we use the Jacobian supervision to train an autoencoder model for images of faces, in which the factors of variation to be controlled are facial attributes. Our resulting model outperforms the state-of-the-art in terms of both reconstruction quality and facial attribute manipulation ability. Autoencoders BID12 are trained to reconstruct an input image while learning an internal low-dimensional representation of the input. Ideally, this representation should be disentangled, in the sense that each hidden unit in the latent code should encode one factor of variation in the formation of the input images, and should control this factor in the output images. There exists extensive literature on learning disentangled representations BID27 BID1 BID5 BID7 BID4 BID22 BID25 BID15 BID3. Disentangled representations have two important applications. One is their use as rich features for downstream tasks such as classification BID27 BID30 or semi-supervised learning. In the face recognition community, for example, disentanglement is often used to learn viewpoint- or pose-invariant features BID32 BID24 BID30. A second important application is in a generative setting, where a disentangled representation can be used to control the factors of variation in the generated image BID25 BID11 BID22 BID19. In this work we concentrate on the second one.
In recent years, with the advent of Generative Adversarial Networks (GANs) BID9, a broad family of methods uses adversarial training to learn disentangled representations BID22 BID19 BID25 BID4. In a generative setting, the adversarial discriminator can be used to assess the quality of a reconstructed image for which the conditioning factors do not exist in the training set BID22 BID4.Another alternative, proposed in Fader Networks BID19, is to apply the adversarial discriminator on the latent code itself, to prevent it from containing any information pertaining to the specified factors of variation. Then, the known factors of variation or attributes are appended to the latent code. This allows to specify directly the amount of variation for each factor, generating visually pleasing attribute manipulations. Despite being trained on binary attribute labels, Fader Networks generalize remarkably well to real-valued attribute conditioning. However, despite recent advances BID10, adversarial training remains a non-trivial min-max optimization problem, that in this work we wish to avoid. Other remarkable disentangling methods that require no adversarial training are: BID5, where the cross-covariance between parts of the latent representation is minimized, so that the hidden factors of variation can be learned unsupervised and BID11; BID15; BID3 where a factorized latent representation is learned using the Variational Autoencoder (VAE) framework. In particular, the authors of BID3, propose to overcome the disentangling versus reconstruction trade-off by progressively allowing a larger divergence between the factorized prior distribution and the latent posterior in a VAE.Related to the task of varying the factors of image generation is that of domain-transfer BID6 BID8 BID20. Here the challenge is to "translate" an image into a domain for which examples of the original image are unknown and not available during training. For example, in the face generation task, the target domain can represent a change of facial attribute such as wearing eyeglasses or not, gender, age, etc. BID20 BID25 BID6. In this section we detail how the Jacobian supervision motivated in Section 1 can be applied, by ways of a practical example. We will use the Jacobian supervision to learn a disentangled image representation, where the main factors of variation are progressively discovered and learned unsupervised. We start with a simple autoencoder model, the teacher T, identified by its encoder and decoder parts (E T, D T). The output of the encoder (the latent code) is split into two parts. One part corresponds to the factors of variation y ∈ R k and the other part corresponds to the nuisance variables, z ∈ R d.We begin by using k = 2 and d = 0, meaning that the latent code of the teacher is only 2-dimensional. We consider the information encoded in these two variables as the two principal factors of variation in the dataset. This choice was done merely for visualization purposes FIG0 ).For this example, we trained a 3-layer multi-layer perceptron (MLP) on MNIST digits, using only the L2 reconstruction loss. We used BatchNorm at the end of the encoder, so that the distribution of y is normalized inside a mini-batch. In FIG0 (a) we show the of sampling this twodimensional variable and feeding the samples to the decoder D T. The ing digits are blurry, but the hidden variables learned to encode the digit class. Next, we create a student autoencoder model (E S, D S), similar to the teacher, but with a larger latent code. 
Namely, k = 2 and d = 1 instead of d = 0, so that the latent code has now an extra dimension and the reconstruction can be improved. In order to try to maintain the conditioning of the digit class by the 2D hidden variable y, we will impose that the Jacobian of the student with respect to y be the same as that of the teacher, as in. How to achieve this is described next. We take two random samples from the training set x 1 and x 2, and feed them to the student autoencoder, producing two sets of latent codes: (y the same pair of images to the teacher autoencoder to obtain y DISPLAYFORM0 Note that the teacher encoder in this case does not produce a z. We observe, by a first-order Taylor expansion, that DISPLAYFORM1 and DISPLAYFORM2 where J T and J S are the Jacobian of the teacher and student decoders respectively. Suppose y DISPLAYFORM3 and DISPLAYFORM4 then, by simple arithmetic, DISPLAYFORM5 where, since we assume holds, we dropped the superscripts for clarity. What FORMULA7 expresses is that the partial derivative of the output with respect to the latent variables y in the direction of (y 2 − y 1) is approximately the same for the student model and the teacher model. To achieve this, the proposed method consists essentially in enforcing the assumptions in FORMULA5 and FORMULA6 by simple reconstruction losses used during training of the student. Note that one could exhaustively explore partial derivatives in all the canonical directions of the space. In our case however, by visiting random pairs during training, we impose the constraint in for random directions sampled from the data itself. This allows for more efficient training than exhaustive exploration. Putting everything together, the loss function for training the student autoencoder with Jacobian supervision is composed of a reconstruction part L rec and a Jacobian part L jac: DISPLAYFORM6 Figure 2: 3 rd to 6 th principal factors of variation discovered by our unsupervised algorithm. The first two factors of variation are learned by the first teacher model FIG0 ). Each time a hidden unit is added to the autoencoder, a new factor of variation is discovered and learned. Each row shows the variation of the newly discovered factor for three different validation samples, while fixing all the other variables. The unsupervised discovered factors are related to stroke and handwriting style.where the subscript j indicates a paired random sample. For the experiments in FIG0 we used λ y = 0.25, λ dif f = 0.1. TAB4 in the appendix presents ablation studies on these hyperparameters. In practice, we found it also helps to add a term computing the cross-covariance between y and z, to obtain further decorrelation between disentangled features BID5: DISPLAYFORM7 where M is the number of samples in the data batch, m is an index over samples and i, j index feature dimensions, andz i andȳ j denote means over samples. In our experiments we weigh this loss with λ xcov = 1e −3.Once the student model is trained, it generates a better reconstructed image than the teacher model, thanks to the expanded latent code, while maintaining the conditioning of the output that the teacher had. The extra variable in the student latent code will be exploited by the autoencoder to learn the next important factor of variation in the dataset. Examples of factors of variations progressively learned in this way are shown in Figure 2.To progressively obtain an unsupervised disentangled representation we do the following procedure. 
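The displayed equations in this passage were lost in extraction. The sketch below shows one plausible way to implement the student loss described above: a term tying the student's specified code y to the teacher's, a term matching the output variation along the direction between a random pair of samples, and the cross-covariance decorrelation term. The exact form of the paper's L_jac may differ; the pairing and the λ_y, λ_diff and λ_xcov weights follow the text, and the encoder/decoder interfaces are assumed (the first teacher has no z). The progressive procedure itself is described next.

```python
# Hypothetical sketch of the student loss with Jacobian supervision (not the exact original).
import torch

def student_loss(x1, x2, student_enc, student_dec, teacher_enc, teacher_dec,
                 lambda_y=0.25, lambda_diff=0.1, lambda_xcov=1e-3):
    y1_s, z1_s = student_enc(x1)
    y2_s, z2_s = student_enc(x2)
    with torch.no_grad():                      # the teacher is frozen
        y1_t = teacher_enc(x1)
        y2_t = teacher_enc(x2)
        t_diff = teacher_dec(y2_t) - teacher_dec(y1_t)

    # Reconstruction of the first sample.
    rec = (x1 - student_dec(y1_s, z1_s)).pow(2).mean()

    # Tie the student's specified code to the teacher's (assumption (5) in the text).
    l_y = (y1_s - y1_t).pow(2).mean() + (y2_s - y2_t).pow(2).mean()

    # Match the output variation along the direction y2 - y1 (first-order Jacobian constraint).
    s_diff = student_dec(y2_s, z1_s) - student_dec(y1_s, z1_s)
    l_diff = (s_diff - t_diff).pow(2).mean()

    # Cross-covariance between y and z, computed over the batch.
    yc = y1_s - y1_s.mean(dim=0, keepdim=True)
    zc = z1_s - z1_s.mean(dim=0, keepdim=True)
    l_xcov = 0.5 * (yc.t() @ zc / x1.shape[0]).pow(2).sum()

    return rec + lambda_y * l_y + lambda_diff * l_diff + lambda_xcov * l_xcov
```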
After training of the student with k = 2, d = 1 is finished, we consider this model as a new teacher (equivalent to k = 3), and we create a new student model with one more hidden unit (equivalent to k = 3, d = 1). We then repeat the same procedure. Results of repeating this procedure 14 times, using 100 epochs for each stage are shown in FIG0. In FIG0 (b), we show how the ing final model can maintain the conditioning of the digit class, while obtaining a much better reconstruction. A model trained progressively until reaching the same latent code dimension but without Jacobian supervision, and only the cross-covariance loss for disentangling BID5, is shown in FIG0 (c). This model also obtains good reconstruction but loses the conditioning. For this model we also found λ xcov = 1e −3 to give the best . To quantitatively evaluate the disentangling performance of each model, we look at how the first two latent units (k = 2) control the digit class in each model. We take two images of different digits from the test set, feed them to the encoder, swap their corresponding y subvector and feed the fabricated latent codes to the decoder. We then run a pre-trained MNIST classifier in the generated image to see if the class was correctly swapped. The quantitative are shown in TAB0. We observe that the reconstruction-disentanglement trade-off is indeed more advantageous for the student with Jacobian supervision. To complement this section, we present of the unsupervised progressive learning of disentangled representations for the SVHN dataset BID23 in Section A.5 in the Appendix. In photographs of human faces, many factors of variation affect the image formation, such as subject identity, pose, illumination, viewpoint, etc., or even more subtle ones such as gender, age, expression. Modern facial manipulation algorithms allow the user to control these factors in the generative process. Our goal here is to obtain a model that has good control of these factors and produces faithful image reconstruction at the same time. We shall do so using the Jacobian supervision introduced DISPLAYFORM0 Figure 3: Diagram of the proposed training procedure for facial attributes disentangling. E and D always denote the same encoder and decoder module, respectively. Images x 1 and x 2 are randomly sampled and do not need to share any attribute or class. Their ground truth attribute labels areȳ 1 and y 2 respectively. The latent code is split into a vector predicting the attributes y and an unspecified part z. Shaded E indicates its weights are frozen, i.e., any loss over the indicated output does not affect its weights.in Section 3. In this more challenging case, the disentangling will be first learned by a teacher autoencoder using available annotations and an original training procedure. After a teacher is trained to correctly disentangle and control said attributes, a student model will be trained to improve the visual quality of the reconstruction, while maintaining the attribute manipulation ability. We begin by training a teacher model for effective disentangling at the cost of low quality reconstruction. Figure 3 shows a diagram of the training architecture for the teacher model. Let x ∈ R H×W ×3be an image with annotated ground truth binary attributesȳ ∈ {−1, 1} k, where k is the number of attributes for which annotations are available. Our goal is to learn the parameters of the encoder (Figure 3, top). 
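Before turning to the teacher's losses for the facial-attribute setting, the class-swap evaluation described above can be sketched in a few lines. The encoder, decoder and the pre-trained MNIST classifier are assumed interfaces; the procedure (encode two test digits of different classes, swap their y sub-vectors, decode, and check the predicted class) follows the text.

```python
# Sketch of the digit-class swap evaluation (hypothetical encoder/decoder/classifier interfaces).
import torch

@torch.no_grad()
def swap_success_rate(pairs, encoder, decoder, classifier):
    """pairs: list of (x_a, x_b, label_a, label_b) with label_a != label_b."""
    correct = 0
    for x_a, x_b, label_a, label_b in pairs:
        y_a, z_a = encoder(x_a)
        y_b, z_b = encoder(x_b)
        # Fabricate latent codes with swapped specified factors y.
        x_ab = decoder(y_b, z_a)   # content of a, class code of b
        x_ba = decoder(y_a, z_b)   # content of b, class code of a
        correct += int(classifier(x_ab).argmax() == label_b)
        correct += int(classifier(x_ba).argmax() == label_a)
    return correct / (2 * len(pairs))
```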
Ideally, y ∈ R k should encode the specified attributes of x, while z ∈ R d should encode the remaining information necessary for reconstruction. DISPLAYFORM0 The training of the teacher is divided into two steps. First, the autoencoder reconstructs the input x, while at the same time predicting in y the ground truth labels for the attributesȳ. Second, the attributes part of the latent code y is swapped with that of another training sample (Figure 3, bottom). The randomly fabricated latent code is fed into the decoder to produce a new image. Typically, this combination of factors and nuisance variables is not represented in the training set, so evaluating the reconstruction is not possible. Instead, we use the same encoder to assess the new image: If the disentangling is achieved, the part of the latent code that is not related to the attributes should be the same for the existing and fabricated images, and the predicted factors should match those of the sample from which they were copied. In what follows, we describe step by step the loss function used for training, which consists of the sum of multiple loss terms. Note that, contrary to relevant recent methods BID22 BID19, the proposed method does not require adversarial training. Reconstruction loss. The first task of the autoencoder is to reconstruct the input image. The first term of the loss is given by the L2 reconstruction loss, as in.Prediction loss. In order to encourage y to encode the original attributes of x indicated in the ground truth labelȳ, we add the following penalty based on the hinge loss with margin 1: DISPLAYFORM1 where the subscript [i] indicates the i th attribute. Compared to recent related methods BID25 BID19, the decoder sees the real-valued predicted attributes instead of an inserted vector of binary attribute labels. This allows the decoder to naturally learn from continuous attribute variables, leaving a degree of freedom to encode subtle variations of the attributes. Cycle-consistency loss. Recall our goal is to control variations of the attributes in the generated image, with the ability to generalize to combinations of content and attributes that are not present in the training set. Suppose we have two randomly sampled images x 1 and x 2 as in Figure 3. After obtaining (y 1, z 1) = E(x 1) and (y 2, z 2) = E(x 2), we form the new artificial latent code (y 2, z 1). Ideally, using this code, the decoder should produce an image with the attributes of x 2 and the content of x 1. Such an image typically does not exist in the training set, so using a reconstruction loss is not possible. Instead, we resort to a cycle-consistency loss. We input this image to the same encoder, which will produce a new code that we denote as (y 2, z 1) = E T (D T (y 2, z 1)). If the decoder correctly generates an image with attributes y 2, and the encoder is good at predicting the input image attributes, then y 2 should predict y 2. We use again the hinge loss to enforce this: DISPLAYFORM2 Here we could have used any random values instead of the sampled y 2. However, we found that sampling predictions from the data eases the task of the decoder, as it is given combinations of attributes that it has already seen. Despite this simplification, the decoder shows remarkable generalization to unseen values of the specified attributes y during evaluation. Finally, we add a cycle-consistency check on the unspecified part of the latent code, z 1 and z 1: DISPLAYFORM3 Encoder freezing. 
The training approach we just described presents a major pitfall. The reversed autoencoder could learn to replicate the input code (y 2, z 1) by encoding this information inside a latent image in whatever way it finds easier, that does not induce a natural attribute variation. To avoid this issue, a key ingredient of the procedure is to freeze the weights of the encoder when backpropagating L cyc1 and L cyc2. This forces the decoder to produce a naturally looking image so that the encoder correctly classifies its attributes. Global teacher loss. Overall, the global loss used to train the teacher is the sum of the five terms: DISPLAYFORM4 where λ i ∈ R, i = 1: 5 represent weights for each term in the sum. Details on how their values are found and how we optimize in practice are described in the next section. Ablation studies showing the contribution of each loss are shown in Section A.3 in the appendix. Student training. After the teacher is trained, we create a student autoencoder model with a larger dimension for the nuisance variables z and train it using only reconstruction and Jacobian supervision ( and ), as detailed in the next section. We implement both teacher and student autoencoders as Convolutional Neural Networks (CNN). Further architecture and implementation details are detailed in the Appendix. We train and evaluate our method on the standard CelebA dataset BID21, which contains 200,000 aligned faces of celebrities with 40 annotated attributes. The unspecified part of latent code (z) of the teacher autoencoder is implemented as a feature map of 512 channels of size 2×2. To encode the attributes part y, we concatenate an additional k = 40 channels. At the output of the encoder the values of these 40 channels are averaged, so the actual latent vector has k = 40 and d = 2048, dimensions for y and z respectively. The decoder uses a symmetrical architecture and, following BID19, the attribute prediction y is concatenated as constant channels to every feature map of the decoder. We perform grid search to find the values of the weights in FORMULA1 by training for 10 epochs and evaluating on a hold-out validation set. The values we used in the experiments in this paper are λ 1 = 10 2, λ 2 = 10 −1, λ 3 = 10 −1, λ 4 = 10 −4, λ 5 = 10 −5. At the beginning of the training of the teacher, the weights of the cycle-consistency losses λ 4 and λ 5 are set to 0, so the autoencoder is only trained for reconstruction (L rec), attribute prediction (L pred) and linear decorrelation (L cov). After 100 training epochs, we resume the training turning on L cyc1 and L cyc2 and training for another 100 epochs. At each iteration, we do the parameter updates in two separate steps. We first update for DISPLAYFORM0 Then, freezing the encoder, we do the update (only for the decoder), for DISPLAYFORM1 After the teacher autoencoder training is completed, we create the student model by appending new convolutional filters to the output of the encoder and the input of the decoder, so that the effective dimension of the latent code is increased. In this experiment, we first doubled the size of the latent code from d = 2048 to d = 4096 at the 200 th epoch and then from d = 4096 to d = 8192 at the 400 th epoch. Note that this is different to the experiment of Section 3, where we grew d by one unit at at time. We initialize the weights of the student with the weights of the teacher wherever possible. 
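Before describing how the student is trained, the teacher's two-step update with encoder freezing described above can be read as the sketch below. The hinge prediction loss with margin 1, the cycle-consistency terms, and the decoder-only second step follow the description; the network interfaces, optimizers and the exact reductions (mean vs. sum) are assumptions rather than the authors' implementation.

```python
# Sketch of one teacher training iteration with encoder freezing (interfaces are assumed).
import torch

def cross_covariance(y, z):
    yc = y - y.mean(dim=0, keepdim=True)
    zc = z - z.mean(dim=0, keepdim=True)
    return 0.5 * (yc.t() @ zc / y.shape[0]).pow(2).sum()

def teacher_step(x1, x2, ybar1, enc, dec, opt_all, opt_dec,
                 lam=(1e2, 1e-1, 1e-1, 1e-4, 1e-5)):
    l1, l2, l3, l4, l5 = lam
    # Step 1: reconstruction + attribute prediction + decorrelation (updates E and D).
    y1, z1 = enc(x1)
    rec = (dec(y1, z1) - x1).pow(2).mean()
    pred = torch.clamp(1.0 - ybar1 * y1, min=0.0).mean()   # hinge with margin 1, labels in {-1, 1}
    loss_a = l1 * rec + l2 * pred + l3 * cross_covariance(y1, z1)
    opt_all.zero_grad(); loss_a.backward(); opt_all.step()

    # Step 2: cycle consistency with the attributes of a second sample (updates D only).
    y1, z1 = enc(x1)
    y2, _ = enc(x2)
    x_swap = dec(y2.detach(), z1.detach())
    y_cyc, z_cyc = enc(x_swap)                               # encoder weights stay frozen here
    cyc1 = torch.clamp(1.0 - y2.detach() * y_cyc, min=0.0).mean()
    cyc2 = (z_cyc - z1.detach()).pow(2).mean()
    loss_b = l4 * cyc1 + l5 * cyc2
    opt_dec.zero_grad(); loss_b.backward(); opt_dec.step()   # opt_dec holds decoder params only
```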
Then, we train the student using the reconstruction loss and the Jacobian loss as defined in Section 3, using λ y = 1, λ dif f = 50, and no prediction nor cycle-consistency loss (λ 2 = λ 4 = λ 5 = 0). The hyperparameters were found by quantitative and qualitative evaluation on a separate validation set. From CelebA, we use 162,770 images of size 256x256 for training and the rest for validation. All the figures in this paper show images from the validation set and were obtained using the same single model. For each model, we evaluated quantitatively how well the generated image is conditioned to the specified factors. To do this, for each image in the CelebA test set, we tried to flip each of the disentangled attributes, one at a time (e.g. eyeglasses/no eyeglasses). The flipping is done by setting the latent variable y i to −α · sign(y i), with α > 0 a multiplier to exaggerate the attribute, found in a separate validation set for each model (α = 40 for all models).To verify that the attribute was successfully flipped in the generated image, we used an external classifier trained to predict each of the attributes. We used the classifier provided by the authors of Fader Networks, which was trained directly on the same training split of the CelebA dataset. Table 2 and Figure 4 show the quantitative we obtained. Most notably, at approximately the same reconstruction performance, the student with Jacobian supervision is significantly better at flipping attributes than the student without it. With the Jacobian supervision, the student maintains almost the same disentangling and conditioning ability as the teacher. Note that these numbers could be higher if we carefully chose a different value of α for each attribute. To the best of our knowledge, Fader Networks BID19 constitutes the state-of-the-art in face image generation with continuous control of the facial attributes. For comparison, we trained Fader Networks models using the authors' implementation with d = 2048 and d = 8192 to disentangle the same number of attributes as our model (k = 40), but the training did not converge (using the same provided optimization hyperparameters). We conjecture that the adversarial discriminator acting on the latent code harms the reconstruction and makes the optimization unstable. Comparisons with these models are shown in Table 2 and in FIG4 in the appendix. We also show Table 2: Quantitative comparison of the disentanglement and reconstruction performance of the evaluated models in the facial attribute manipulation task. Disentanglement is measured as the ability to flip specified attributes by varying the corresponding latent unit. Figure 4: Disentanglement versus reconstruction trade-off for the facial attribute manipulation example (top-left is better). The disentangling score measures the ability to flip facial attributes by manipulating the corresponding latent variables.in FIG2 that our multiple-attribute model achieves similar performance to the single-attribute Fader Networks models provided by the authors. Finally, FIG1 shows the of manipulating 32 attributes for eight different subjects, using the student model with Jacobian supervision. Note that our model is designed to learn the 40 attributes, however in practice there are 8 of them which the model does not learn to manipulate, possibly because they are poorly represented in the dataset (e.g. sideburns, wearing necktie) or too difficult to generate (e.g. wearing hat, wearing earrings). 
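The attribute-flipping evaluation described above can be summarized in a short sketch. The external attribute classifier and the encoder/decoder are assumed interfaces; the flip rule y_i ← −α·sign(y_i) with α = 40 follows the text.

```python
# Sketch of the attribute-flip evaluation (hypothetical interfaces).
import torch

@torch.no_grad()
def attribute_flip_rate(images, enc, dec, attr_classifier, alpha=40.0):
    flipped, total = 0, 0
    for x in images:
        y, z = enc(x)
        before = attr_classifier(dec(y, z)) > 0            # predicted attributes, as booleans
        for i in range(y.shape[-1]):                        # flip one attribute at a time
            y_flip = y.clone()
            y_flip[..., i] = -alpha * torch.sign(y[..., i])
            after = attr_classifier(dec(y_flip, z)) > 0
            flipped += int(after[..., i].item() != before[..., i].item())
            total += 1
    return flipped / total
```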
A natural trade-off between disentanglement and reconstruction exists when learning image representations using autoencoder architectures. In this work, we showed that it is possible to overcome this trade-off by first learning a teacher model that is good at disentangling and then imposing the Jacobian of this model with respect to the disentangled variables to a student model that is good at reconstruction. The student model then becomes good at both disentangling and reconstruction. We showed two example applications of this idea. The first one was to progressively learn the principal factors of variation in a dataset, in an unsupervised manner. The second application is a generative model that is able to manipulate facial attributes in human faces. The ing model is able to manipulate one order of magnitude more facial attributes than state-of-the-art methods, while obtaining similar or superior visual , and requiring no adversarial training. For the autoencoder utilized for experiments in Section 3, we used the following architecture. For the encoder: DISPLAYFORM0 where F (I, O) indicates a fully connected layer with I inputs and O outputs. For the first teacher model (k = 2, d = 0), we also used BatchNorm after the encoder output. The decoder is the exact symmetric of the encoder, with a Tanh layer appended at the end. We used Adam with a learning rate of 3e −4, a batch size of 128 and weight decay coefficient 1e −6. Following BID19, we used convolutional blocks of Convolution-BatchNorm-ReLU layers and a geometric reduction in spatial resolution by using stride 2. The convolutional kernels are all of size 4×4 with padding of 1, and we use Leaky ReLU with slope 0.2. The input to the encoder is a 256×256 image. Denoting by k the number of attributes, the encoder architecture can be summarized as: DISPLAYFORM0 where C(f) indicates a convolutional block with f output channels. The decoder architecture can be summarized as: DISPLAYFORM1 where D(f) in this case indicates a deconvolutional block doing ×2 upsampling (using transposed convolutions, BatchNorm and ReLU) with f input channels. We trained all networks using Adam, with learning rate of 0.002, β 1 = 0.5 and β 2 = 0.999. We use a batch size of 128. TAB3 shows a comparison chart between the proposed and related methods. We applied the procedure described in Section 3 for progressive unsupervised learning of disentangled representations to the Street View House Numbers (SVHN) dataset BID23. The SVHN dataset contains 73,257 32×32 RGB images for training. For this experiment, the encoder architecture is: DISPLAYFORM0 Here, C(n) represents a convolutional block with n 3 × 3 filters and zero padding, ReLU activation function and average pooling. The decoder architecture is: DISPLAYFORM1 Here, D(n) represents a ×2 upconvolution block with n 4 × 4 filters and zero padding, ReLU activation function and average pooling. The latent code was started with k = 2 and d = 0 and progressively grown to k = 2, d = 16. Each stage was trained for 25 epochs. We used λ y = 0.025, λ dif f = 0.01. We used Adam with a learning rate of 3e − 4, a batch size of 128 and weight decay coefficient 1e − 6.The first teacher model (k = 2, d = 0) achieves a reconstruction MSE of 1.94e −2 and the final student model (k = 2, d = 16) a reconstruction MSE of 4.06e−3. FIG6 shows the two principal factors of variation learned by the first teacher model (corresponding to k = 2, d = 0). 
Contrary to the MNIST example of Section 3, here the two main factors of variation are not related to the digit class, but to the shading of the digit. The progressive growth of the latent code is carried on from d = 0 to d = 16. The following factors of variation are related to lighting, contrast and color (see FIG0). In this case, the unsupervised progressive method discovered factors that appear related to the digit class at the 9 th and 10 th steps of the progression. FIG0 shows how the digit class can be controlled by the student with d = 16 by varying these factors. Because of the Jacobian supervision, the student is able to control the digit class while maintaining the style of the digit. Finally, in FIG0 we show that the student also maintains control of the two main factors of variation discovered by the first teacher. Figure 10: Third, fourth and fifth factors of variation automatically discovered on SVHN. Each row corresponds to one factor and each column corresponds to one sample. Each factor is varied while maintaining the rest of the latent units fixed. FIG0: Factors of variation related to the center digit class appear to emerge on the 9th and 10th discovered factor during the unsupervised progressive procedure described in Section 3. Here we show how the student model with Jacobian supervision and d = 16 can be used to manipulate the digit class while approximately maintaining the style of the digit, by varying the latent units corresponding to those factors. The bottom row shows the original images (reconstructed by the autoencoder). All images are from the test set and were not seen during training. FIG0: Result of the student with Jacobian supervision (d = 16) when varying the two factors learned by the teacher FIG6, for four different images (whose reconstruction is shown on the bottom row). The conditioning related to shading is maintained. (Left to right: darker to lighter. Top to bottom: light color on the left to light color on the right.) All images are from the test set and were not seen during training. | A method for learning image representations that are good for both disentangling factors of variation and obtaining faithful reconstructions. | 1,297 | scitldr |
We propose a new notion of'non-linearity' of a network layer with respect to an input batch that is based on its proximity to a linear system, which is reflected in the non-negative rank of the activation matrix. We measure this non-linearity by applying non-negative factorization to the activation matrix. Considering batches of similar samples, we find that high non-linearity in deep layers is indicative of memorization. Furthermore, by applying our approach layer-by-layer, we find that the mechanism for memorization consists of distinct phases. We perform experiments on fully-connected and convolutional neural networks trained on several image and audio datasets. Our demonstrate that as an indicator for memorization, our technique can be used to perform early stopping. A fundamental challenge in machine learning is balancing the bias-variance tradeoff, where overly simple learning models underfit the data (suboptimal performance on the training data) and overly complex models are expected to overfit or memorize the data (perfect training set performance, but suboptimal test set performance). The latter direction of this tradeoff has come into question with the observation that deep neural networks do not memorize their training data despite having sufficient capacity to do so BID38, the explanation of which is a matter of much interest. Due to their convenient gradient properties and excellent performance in practice, rectified-linear units (ReLU) have been widely adopted and are now ubiquitous in the field of deep learning. In addition, the relative simplicity of this function (max(·, 0)) makes the analysis of ReLU networks more straight-forward than networks with other activation functions. We propose a new notion of'non-linearity' of a ReLU layer with respect to an input batch. We show that networks that generalize well have deep layers that are approximately linear with respect to batches of similar inputs. In contrast, networks that memorize their training data are highly nonlinear with respect to similar inputs, even in deep layers. Our method is based on the fact that the main source of non-linearity in ReLU networks is the threshold at zero. This thresholding determines the support of the ing activation matrix, which plays an important role in the analysis of non-negative matrices. As we discuss in Section 3, the non-negative rank of a matrix is constrained by the shape of the support, and is therefore indicative of the degree of non-linearity in a ReLU activation matrix with respect to the input. Although computing the non-negative rank is NP-hard , we can restrict it with approximate non-negative matrix factorization (NMF) BID20. Consequently, we propose to estimate the'non-linearity' of a ReLU layer with respect to an input batch by performing NMF on a grid over the approximation rank k, and measuring the impact on network performance. This procedure can be seen as measuring the robustness of a neural network to increasing compression of its activations. We therefore compare our NMF-based approach to two additional dimensionality reduction techniques, namely principal component analysis (PCA) and random ablations. We informally define memorization as the implicit learning of a rule that associates a specific sample (i.e., with index i) to a particular label (e.g., with index j). Such a rule does not benefit the network in terms of improving its performance on new data. We show that our NMF-based approach is extremely sensitive to memorization in neural networks. 
We report for a variety of neural network architectures trained on several image and audio datasets. We conduct a layer-by-layer analysis and our reveal interesting details on the internal mechanism of memorization in neural networks. Finally, as an indicator for memorization, we use our proposed measure to perform early stopping. The study of factors involved in the bias-variance tradeoff in learning models goes back several decades. Classical in statistical learning consider properties of learning models such as the VC-dimension BID33 and Rademacher complexity BID4. These properties give generalization bounds in terms of the capacity model to (over)fit data. When considering the vast capacity of deep neural networks, such bounds become irrelevant and fail to explain their ability to generalize well in practice BID38 BID5.More direct analyses have been done with respect to a specific setting of model parameters. For instance, BID3 showed that the number of weights in a network is less important compared to their scalar value (e.g. 2 -norm), and more recently BID5 presented a bound for deep neural networks based on the product of spectral norms of the network's weight matrices. BID0 showed that memorizing networks contain more information in their weights. Methods to explain generalization have been proposed that examine a network's robustness to perturbations BID13 BID6 BID14 BID26 BID22. These methods propose the notion of flatness of minima on the loss surface, assuming that perturbing the parameters without dramatically changing performance is an indicator of the generalization of a network. However, any reversible transformation, such as simple scaling, can arbitrarily manipulate the local flatness without affecting generalization BID8. The procedure we propose can be viewed as applying perturbations, albeit to activations and not parameters, and must address this concern. The perturbations we apply to activations account for magnitude, since they depend on a change of rank or non-negative rank of the activation matrix, a property which is robust to rescaling and similar reversible transformations. In contrast to the methods described thus far, which deal exclusively with the parameters of the model, methods have been developed that account for the role of the data distribution. BID23 proposed to use the Fisher-Rao norm, which uses the geometry of the data distribution to weigh the contribution of different model parameters. The empirical studies of BID24 and BID27 explore robustness to specific types of noise. The former uses Gaussian noise and masking noise injected into hidden activations, while the latter interpolates between input samples to study network behavior on and off the data manifold. In both cases, robustness to noise proved a reliable indicator for good generalization. Additionally, BID1 derive generalization bounds in terms of robustness to noise. Our experimental setup is reminiscent of BID24 in that both methods apply a form of compression to hidden activations and test for robustness to this type of noise. Specifically, they set random axis-aligned directions in feature space to zero which can be viewed as a crude form of dimensionality reduction, i.e., by simply removing canonical dimensions. In our experiments we refer to this method as random ablations. Our show that robustness to NMF compression is much more correlated with low memorization/high generalization than robustness to random ablations. BID2 have also studied various empirical aspect of memorization. 
As a dimensionality reduction technique, NMF has gained popularity due to its producing meaningful factorizations that lend themselves to qualitative interpretation across various domains such as document clustering BID37, audio source separation BID11, and face recognition BID12. In the context of deep convolutional neural networks, BID7 applied NMF to the activations of an image classifier and showed that the result gives a decomposition into semantic parts, which benefits from the transformation invariance learned by the neural network. Consider a ReLU layer of a neural network, parameterized by a weight matrix W ∈ R^{m×q}. For a batch of n inputs X ∈ R^{n×m}, we compute the layer activation matrix A as follows: A = max(XW, 0) ∈ R_+^{n×q}, where R_+ are the non-negative reals. We omit the bias term for notational convenience. The processing of a single input x by a ReLU network is equivalent to sampling a sub-network that is linear with respect to the sample BID35. This could be accomplished by simply setting to zero the columns of each W whose dot product with the input is negative (and would thus be set to zero by ReLU), and then removing the thresholding. Extending this notion to a batch of several input samples to a ReLU layer, suppose the samples are sufficiently close to each other such that they all share the same ReLU mask m ∈ {0, 1}^q. In this case, we may say that the layer is linear with respect to its input batch. This is because, for the entire batch, instead of using ReLU, we could zero out a subset of columns and obtain a linear system, i.e., A = XW diag(m). For an activation matrix A (Equation 1), we consider the support M = supp(A), which we describe as a binary 0/1 matrix where M_{i,j} = 1 wherever A_{i,j} > 0. Because A is a ReLU activation matrix, the structure of M is mainly determined by the thresholding at zero. Because thresholding is the main source of non-linearity in a ReLU network, the support takes on a special meaning in this case. We want to characterize how close to being linear a layer is with respect to its input X by examining the support of the resulting activations M. If all the rows of M are identical to a unique vector m, we can say the layer is completely linear with respect to X. In general, the 'simpler' the support M, the closer to linearity the layer. One measure that captures this idea is the rectangle cover number of a matrix, rc(M), an important quantity in the study of communication complexity BID17. Also known as the Boolean rank, rc(M) is the smallest number r for which there exist binary matrices U_B ∈ {0, 1}^{n×r}, V_B ∈ {0, 1}^{r×q} such that their Boolean matrix multiplication satisfies M = U_B V_B. As a complexity measure for ReLU activations, rc(M) = 1 means the layer is linear with respect to its input, and higher values of rc(M) imply increasing non-linearity. This is visualized in FIG0. Intuitively, imagine having to fit a layer with 'ReLU switches', each of which controls a subset of weight matrix columns. In the linear case, one switch would suffice to describe the data. In the most non-linear case, we would require a switch for every column, which is also the maximal value of rc(M). Because computing the rectangle cover number rc(M) is complex, several approximations and bounds to it have been studied BID9.
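The activation matrix and its support defined above are straightforward to compute. The small NumPy sketch below builds A = max(XW, 0), its binary support M, and illustrates the completely-linear case in which all rows share a single ReLU mask m, so that A = XW diag(m); the random data is only for illustration.

```python
# NumPy sketch of the ReLU activation matrix, its support, and the shared-mask linear case.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 16, 32                     # batch size, input dim, layer width
X = rng.normal(size=(n, m))
W = rng.normal(size=(m, q))

A = np.maximum(X @ W, 0.0)              # activation matrix, Equation 1
M = (A > 0).astype(np.uint8)            # support supp(A)

# If every row of M equals one mask m, the layer is linear w.r.t. this batch:
shared_mask = np.all(M == M[0], axis=0).all()
if shared_mask:
    A_linear = (X @ W) * M[0]           # equivalent to X W diag(m), no thresholding needed
    assert np.allclose(A_linear, A)
```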
For the support of a non-negative matrix, a well-known upper-bound is: rc(supp(A)) ≤ rank_+(A), where rank_+(A) is the non-negative rank of A BID10, defined as the smallest number k such that there exist non-negative matrices U_+ ∈ R_+^{n×k} and V_+ ∈ R_+^{k×q} with A = U_+V_+. Similar to the rectangle cover number, the non-negative rank is hard-constrained by the combinatorial arrangement of supp(A), but additionally accounts for the actual value in the non-zero entries of A. While computing rank_+(A) is not easier than computing rc(supp(A)), we can restrict it by performing approximate non-negative matrix factorization (NMF). For a given non-negative rank constraint k, NMF solves for: min_{U_+, V_+} ||A − U_+V_+||_F, with U_+, V_+ as defined above. The result U_+V_+ = Ã_k ≈ A is the closest matrix to A under the Frobenius norm that has rank_+ at most k. Consequently, we propose to estimate the 'linearity' of a ReLU layer with respect to a batch of similar inputs by performing NMF on a grid over the non-negative rank k, and measuring the impact on network performance by observing the change in the prediction (output layer) as we change k. This procedure also addresses the fact that in practice network activations tend to be noisy, whereas supp(A) is not robust to noise, i.e., A_{i,j} = ε > 0 → M_{i,j} = 1 even for very small ε. Concretely, if we let A_i be the activation matrix at layer i, during the forward pass, we replace the feature activations of one or several layers with their rank k NMF approximations, A_i ← Ã_{i,k} = U_+V_+. For convolutional networks, we first reshape the tensor of feature maps from n × q × h × w to (n · h · w) × q, i.e., we flatten the batch (n) and spatial dimensions (h, w) to form a matrix with q columns, where q is the number of channels in that layer. We then inversely reshape the approximated features to continue forward propagation through the network. We now characterize the input batch, with respect to which we would like to measure layer linearity. Informally, the goal of training is to cluster together input samples that have similar (or identical) output labels, while separating them from samples of other labels. In the context of classification then, we expect that from a certain depth and onward, samples of the same class will have similar activations, and thus a simpler support. In other words, while a network may exploit flexible non-linear structure to separate different classes, we expect that with respect to a single class, deep layers are approximately linear. We analyze each layer by applying NMF compression to its activation matrix with increasing rank k, while observing the impact on classification performance. In (a) and (b) we show the k vs. accuracy curves at layers of different depth. We can immediately see that in deep layers, networks with high memorization are significantly less robust to NMF compression, indicating higher degrees of non-linearity. Furthermore, networks trained on fully randomized labels (p = 1) behave differently than networks with partial or no randomization. By summarizing each curve in (a) and (b) by its area under the curve (AuC), we show in (c) a birds-eye view over all layers. All networks with p < 1 pass through distinct phases consisting of a feature extraction phase until conv3_1, followed by a memorization phase until conv4_2, followed by a final clustering phase. Interestingly, the case p = 1 shifts the process into earlier layers, explaining why layer-by-layer it appears as an outlier.
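A minimal implementation of the forward-pass replacement described above is sketched below using scikit-learn's NMF. The layer is intercepted with a forward hook, its (non-negative, post-ReLU) activations are reshaped to (n·h·w) × q, factorized with rank k, and the approximation is propagated instead of the original activations. This is an illustrative sketch, not the authors' code; the layer name in the usage note is a placeholder.

```python
# Sketch: replace a conv layer's activations by their rank-k NMF approximation in the forward pass.
import numpy as np
import torch
from sklearn.decomposition import NMF

def nmf_compress_hook(k):
    def hook(module, inputs, output):
        n, q, h, w = output.shape
        # Flatten batch and spatial dims: (n*h*w) x q, channels as columns.
        a = output.detach().permute(0, 2, 3, 1).reshape(-1, q).cpu().numpy()
        model = NMF(n_components=k, init='nndsvda', max_iter=200)
        u = model.fit_transform(a)                 # (n*h*w) x k, non-negative
        a_k = u @ model.components_                # rank-k approximation of the activations
        a_k = torch.from_numpy(a_k).to(output).reshape(n, h, w, q).permute(0, 3, 1, 2)
        return a_k                                 # propagated instead of the original output
    return hook

# Usage sketch: handle = cnn.layer4.register_forward_hook(nmf_compress_hook(k=10));
# run the single-class batch through the network, record accuracy, then handle.remove().
```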
When single-class batches are not approximately linear even in deep layers, i.e., activations are not clustered within a few linear regions, we empirically show in the next section that this behavior is indicative of memorization. We start by studying networks that have been forced into different levels of memorization due to label randomization applied to their training set BID38. The level of induced memorization is controlled by setting a probability p for a training label to be randomized, i.e., p = 0 is the unmodified dataset and p = 1 gives fully random labels. Note that the capacity of these networks is sufficiently large such that the training accuracy is 1 in all cases, regardless of the value of p. As such, we use batches of training data and observe how accuracy drop from 1 to constant prediction as we increase the level of compression. In all experiments, sampling single-class batches is done with respect to the label used for training (i.e., the random label if p > 0). We sample batches stochastically (up to the label), have found all methods discussed below to be robust to the batch size (e.g., 20-100). In all our experiments we set the batch size to 50.We perform experimental evaluations on several image datasets, namely CIFAR-10 (BID18), Fashion-MNIST BID36, SVHN BID25 ImageNet , as well as on the Urban Sounds audio classification dataset BID30. We use a fully-connected network for Fashion-MNIST and various CNN architectures for the others, which we describe in more detail the appendix. We start by analyzing the layers of an 11-layer CNN trained on CIFAR-10. We sampled 10 batches (one batch per class) of 50 images, and compressed the activation matrix at each layer individually down to various values of the non-negative rank. We then measured classification accuracy of the prediction. In this analysis we report average for 60 neural networks, ten networks (with different random initializations) trained per randomization level p. In FIG1 (a) and (b) we show k vs. accuracy curves of networks trained with increasing levels of label randomization, at an early layer (conv2_1) and a deep layer (conv4_1) respectively. We can immediately see that networks trained on fully randomized labels (p = 1) behave differently than networks with partial or no randomization. Furthermore, note that in deep layers, memorizing networks are significantly less robust to NMF compression, i.e., their activations posses a high nonnegative rank, which indicates high non-linearity with respect to the input, as discussed in Section 3.2. We can characterize each curve in (a) and (b) with a single number, its area under the curve (AuC). This allows us in FIG1 (c) to generate a single figure for all layers. Networks with p < 1 display a similar feed-forward trend up until layer conv3_1. Since these networks differ from each other in no way other than the level of label randomization on the training data, we hypothesize this to be a generic feature extraction phase common to all of them. In the next phase, until conv4_2, we see a big difference between networks, such that more memorization (higher p) is correlated with lower AuC, i.e., higher non-negative rank and hence non-linearity of those layers with respect to single-class batches. We therefore localize memorization to these layers. Lastly, the phase only of conv4_3 is where samples of the same class are clustered together, right before the final 10-dimensional classification layer (which is not shown). 
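The AuC summary just introduced, and the per-layer k vs. accuracy curves it condenses, can be produced with a simple grid over k; a sketch follows, with `accuracy_with_nmf` as a hypothetical helper that runs a single-class batch through the network with the compression hook from the previous sketch attached. The layer-by-layer discussion continues below.

```python
# Sketch: k vs. accuracy curve for one layer and its area under the curve (AuC).
import numpy as np

def layer_auc(model, layer, batch, labels, ks, accuracy_with_nmf):
    """accuracy_with_nmf(model, layer, batch, labels, k) -> accuracy in [0, 1] (assumed helper)."""
    accs = [accuracy_with_nmf(model, layer, batch, labels, k) for k in ks]
    # Normalize the k axis so AuC values are comparable across layers of different width.
    ks = np.asarray(ks, dtype=float)
    auc = np.trapz(accs, x=(ks - ks.min()) / (ks.max() - ks.min()))
    return accs, auc
```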
This final phase is in accordance with the premise that regardless of randomization level p, all of these networks achieve perfect training set accuracy. Interestingly, setting p = 1 shifts the process to earlier layers, explaining why layer-by-layer this case appears as an outlier. We compare different compression techniques with regard to detecting memorization. Given our choice of solving NMF under the Frobenius norm (Equation 3), a natural method to compare against is principal component analysis (PCA), which optimizes under the same norm but without the non-negativity constraint on its factors. We also consider random ablations, i.e., setting a random subset of columns in the activation matrix to zero, since this technique has been used previously to detect memorization BID24. Rather than choosing a single layer, we sequentially apply compression to several layers. We target the final convolutional blocks of our CNNs, all of which contain three layers, each of which consists of 512 channels. In fully-connected networks, we applied compression to all layers. In FIG2 we give results for the CIFAR-10 dataset, which confirm that NMF compression is indeed more sensitive to memorization, due to the properties of the non-negative rank discussed in Section 3.2. PCA, which is less constrained, is more 'efficient' at compressing the activations, but is in turn less discriminative with respect to the level of memorization. Finally, we confirm that robustness to random ablations correlates with less memorization, however less so than NMF. It should be noted that NMF does show more variance than the other two methods, and incurs a higher computational cost, as discussed in Section 6.6 in the appendix. In FIG3 we show additional results for single-class NMF on three additional datasets and network architectures (described in the appendix), including a fully-connected network for Fashion-MNIST. The results in (d) and (e), of applying PCA and NMF to multi-class batches, show that such batches produce activations with higher rank or non-negative rank compared to single-class batches. This is a result of the network trying to separate samples of different labels. We have shown results for networks forced into memorization due to label randomization. In this section we show our technique is useful for predicting good generalization in a more realistic setting, without artificial noise. In addition to the experiments below, we refer the reader to Section 6.3 in the appendix, where we predict per-class generalization of a pre-trained VGG-19 network on a set of ImageNet classes. We trained 96 CNN classifiers on CIFAR-10, over a grid of hyper-parameter values for the batch size, weight decay, and optimization algorithm, SGD vs. ADAM BID16. Following the same procedure as above, for each of the three methods, NMF, PCA, or random ablations, we computed the k vs. accuracy curves for each network, targeting its final convolutional block. In Figure 5 we compare the area under the curve (AuC) of each curve with the average generalization error on the test set. While all three methods show correlation with generalization error, NMF is most correlated with a Pearson correlation of -0.82, followed by PCA with -0.64 and random ablation with -0.61. We test whether our method can detect memorization during training. Doing so would allow us to perform early stopping, i.e., stop training as memorization begins to decrease generalization. We trained CNNs on CIFAR-10 with the original labels. Each network was trained for 10K batches with a batch size of 100.
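For reference, the two baseline compressors compared above — a rank-k PCA reconstruction under the same Frobenius norm, and random channel ablation — might look like the sketch below. These are generic implementations assumed for illustration; the exact number of retained components or ablated channels used in the experiments is not specified here.

```python
import numpy as np

def pca_compress(A, k):
    """Rank-k reconstruction under the Frobenius norm, with no sign constraint on the factors."""
    mean = A.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(A - mean, full_matrices=False)
    return mean + (U[:, :k] * s[:k]) @ Vt[:k]

def random_ablation(A, n_keep, seed=0):
    """Zero out all but n_keep randomly chosen columns (channels) of the activation matrix."""
    rng = np.random.default_rng(seed)
    out = np.zeros_like(A)
    keep = rng.choice(A.shape[1], size=n_keep, replace=False)
    out[:, keep] = A[:, keep]
    return out
```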
We recorded the test set error every 250 batches, and tested the non-linearity of the deepest three convolutional layers using our NMF-based approach with a coarse grid on k. As before, we compute the area under each k vs. accuracy curve as in FIG2. Finally, we also computed the area under the curve produced by random ablations. Results of two instances are shown in Figure 6 (a) and (b). In these figures we compare the test loss against our single-class NMF method and random ablations. We smooth the plots using a radius of two epochs to reduce the noise. The matching-color dashed lines mark the local minima of the test loss as well as the location of the first local maxima of the NMF and random ablation AuC curves after smoothing has been applied. We notice that the test loss minima align almost precisely with the maximum NMF AuC. We further confirm this behavior in Figure 6 (c), where we compare the stopping times of NMF and the random ablations method against the best test loss over 10 different runs. We have introduced a notion of a ReLU layer's non-linearity with respect to an input batch, which is based on its proximity to a linear system. We measure this property indirectly via NMF applied to deep activations of single-class batches. While more analysis is required before definite guarantees could be given, we find that our approach is successful in detecting memorization and generalization across a variety of neural network architectures and datasets. The exact architectures we used for each dataset are given in Table 1. We denote a linear or convolutional layer followed by a ReLU as Linear+ and Conv+, respectively. It is interesting to study the impact of ablating the activation in the directions found by NMF and PCA by forward propagating the residual, i.e., A_i ← A_i − Ã_i^k. This is interesting because in the case of PCA, for instance, the top k directions are those that capture most of the variance in the activation matrix, and presumably the k directions found by NMF are of similar importance. This is not true for the random ablations, where the ablated directions are of no special importance. In Figure 7 we see that it is networks with no induced memorization that are most vulnerable to ablation of NMF and PCA directions. In other words, while non-memorizing networks are more robust to random ablations, they are not robust to ablations of specific important directions. This is in contrast to the interpretation of Morcos et al. The VGG-19 model BID31, trained on ImageNet, is known for its good generalization ability, as evidenced by its widespread use as a general feature extractor. We use a pre-trained model here as an example of a well-generalizing network and analyze it with our method. We apply NMF compression to the three deepest convolutional layers, on activations of both single-class batches and multi-class batches. We select 50 random classes from ImageNet and gather batches of 50 training samples from each class. In Figure 8, NMF applied to single-class batches (shown in blue) has a denoising effect and improves over the baseline accuracy of the batch, shown as a dashed line. As the constraint on k is relaxed, that accuracy drops back to its baseline level. We contrast this behavior with the one shown in green when using multi-class batches. Here, only when k is large do we regain baseline accuracy, and sensitivity to ablation is similarly diminished. This is due to the critical role non-linearity plays in separating the different classes.
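The residual-ablation experiment mentioned above — removing, rather than keeping, the top-k directions by forward propagating A − Ã_k — could be sketched roughly as follows. The solver settings are illustrative assumptions, and the PCA variant would be analogous with SVD factors in place of the NMF factors.

```python
import numpy as np
from sklearn.decomposition import NMF

def ablate_nmf_directions(A, k):
    """Forward propagate the residual A - A_k instead of A, removing the k NMF directions."""
    A_nn = np.maximum(A, 0.0)                    # NMF requires non-negative input
    nmf = NMF(n_components=k, init="nndsvda", max_iter=200)
    A_k = nmf.fit_transform(A_nn) @ nmf.components_
    return A_nn - A_k                            # residual activations, propagated onward
```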
Ablating the NMF directions dramatically reduces classification accuracy. Finally, in Figure 8 (c) we show there is a significant per-class correlation (Pearson r = 0.78) between NMF AuC and accuracy on batches from the ImageNet test set. In this experiment we use NMF-based early stopping to improve neural network accuracy in the context of few-shot learning, i.e., learning with very few samples per output label. We choose this setting since it is representative of data scarcity, where one would like to use all available data for training, rather than holding some out for validation. We demonstrate this on the case of MNIST BID19 digits with only 2 samples per class, which results in a training set of 20 samples. For early stopping with NMF, we set a very simple grid over k, a single point at k = 1. The NMF AuC thus simply becomes the training accuracy when compressed with NMF at k = 1. In FIG6 it is evident that training set accuracy with NMF k = 1 shows a similar gradient to the test set accuracy. Based on this observation, as before, we extract the first peak of the smoothed NMF curve, and stop there. Results are shown in Table 2 for 10 runs with randomly sampled training sets. We compare accuracy at our early stopping point to the best test set accuracy detected throughout the run (Best case), the average test set accuracy where the training set accuracy is 1 (Average case), and similarly the lowest test set accuracy where the training set accuracy is 1 (Worst case).
Figure caption: NMF early stopping for few-shot learning of MNIST digits with only 20 samples. By observing the training set accuracy under NMF k = 1 compression, we are able to correctly guess the gradient of the test set accuracy. We use this to perform early stopping with the simple heuristic of stopping at the first peak, which leads to improved accuracy as shown in Table 2.
Table 2: NMF early stopping for few-shot learning of MNIST digits with only 20 samples. By observing the training set accuracy under NMF k = 1 compression, we are able to correctly guess the gradient of the test set accuracy. Early stopping at the first peak consistently improves accuracy.
In the first row of Table 2 we see that on average our method significantly improves over not using early stopping, and is on par with a recently proposed method specifically designed for few-shot learning. Furthermore, in the last two rows we show that per sampled training set, early stopping consistently improves accuracy. In FIG1 we show for every layer the area under the curve (AuC) of its k vs. classification accuracy curve. However, in addition to the accuracy, the NMF reconstruction itself is also a quantity of interest. The main difficulty involved with interpreting the NMF error is scale. The error depends on the magnitude of the activations, which varies across networks, layers and even channels.
Figure caption: Layer-by-layer view of (a) raw and (b) normalized NMF reconstruction errors, which NMF is trying to minimize. As we again notice the outlier behavior of networks trained with label randomization p = 1, in (c) we localize the transition between the two regimes to around p = 0.9.
In FIG0 (a) and (b) we see how the reconstruction error varies with depth, with an interesting interaction between the memorization level and depth. The error in absolute terms echoes the accuracy curves, with p = 1 again presenting outlier behavior. Returning to the accuracy measurement as in Figure 2 (c), sampling p more densely reveals in FIG0 (c) that a phase shift occurs around p = 0.9, where networks "shift" their memorization to earlier layers.
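The early-stopping rule used above — smooth the per-checkpoint NMF AuC (or NMF k = 1 training accuracy) curve and stop at its first local maximum — can be sketched as below. The smoothing kernel width and the fallback behavior when no interior peak is found are assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np

def first_peak_stop(auc_history, radius=2):
    """Return the checkpoint index of the first local maximum of the smoothed AuC curve."""
    x = np.asarray(auc_history, dtype=float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = np.convolve(x, kernel, mode="same")
    for t in range(1, len(smooth) - 1):
        if smooth[t] >= smooth[t - 1] and smooth[t] > smooth[t + 1]:
            return t
    return len(smooth) - 1   # no interior peak found: fall back to the last checkpoint
```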
Applying NMF compression to large matrices naturally incurs certain overhead. We find, however, that our implementation of the multiplicative update algorithm BID21 runs in reasonable time thanks to GPU acceleration. We report timing for a typical batch used for VGG-19, i.e., 100 samples of 224×224 color images. At layer conv5_4 the activations form a tensor of size 100 × 512 × 14 × 14, which we flatten to a matrix of size 19600 × 512. In FIG0 we show the timing curve for this batch as we increase k, using an NVIDIA Titan X card. As can be seen, at k = 500 processing of the batch to convergence requires 197 milliseconds on average. The final runtime depends on the number of classes sampled and the granularity of the grid over k. We found our measurements to be robust to heavy subsampling of both. | We use the non-negative rank of ReLU activation matrices as a complexity measure and show it (negatively) correlates with good generalization. | 1,298 | scitldr |
Deep neural networks have achieved state-of-the-art performance in various fields, but they have to be scaled down to be used for real-world applications. As a means to reduce the size of a neural network while preserving its performance, knowledge transfer has attracted a lot of attention. One popular method of knowledge transfer is knowledge distillation (KD), where softened outputs of a pre-trained teacher network help train student networks. Since KD, other transfer methods have been proposed, and they mainly focus on loss functions, activations of hidden layers, or additional modules to transfer knowledge well from teacher networks to student networks. In this work, we focus on the structure of a teacher network to get the effect of multiple teacher networks without additional resources. We propose changing the structure of a teacher network to have stochastic blocks and skip connections. In doing so, a teacher network becomes the aggregate of a huge number of paths. In the training phase, each sub-network is generated by dropping stochastic blocks randomly and used as a teacher network. This allows training the student network with multiple teacher networks and further enhances the student network with the same resources as a single teacher network. We verify that the proposed structure brings further improvement to student networks on benchmark datasets. Deep neural networks (DNNs) have achieved state-of-the-art performance on complex tasks like computer vision (He et al. 2016), language modeling (Jozefowicz et al. 2016), and machine translation. Moreover, they surpass human ability in several fields including image classification (He et al. 2016), the game of Go, voice generation (Oord et al. 2016), and so on. Despite their superior performance, it is difficult to use DNN-based models because of limited memory and computational resources in embedded systems. To deal with this problem, many studies have been done to make DNNs smaller yet efficient so that they are applicable in resource-limited cases. One of them is knowledge transfer (KT), which trains a smaller network with the information of a large model. The primary goal of this paper is to make a single teacher network behave as multiple teacher networks. Since multiple teacher networks provide various outputs on a given input, they can provide more extensive knowledge than a single teacher network does. It has been shown that student networks improve further with multiple teacher networks which are used as an ensemble or separately (Hinton, Vinyals, and Dean 2015; You et al. 2017; Zhang et al. 2018). However, using multiple teacher networks is a resource burden and delays the training process. In this work, we propose to add stochastic blocks and skip connections to a teacher network. In doing so, we can get the effect of multiple teacher networks with the same resources as a single teacher network. A stochastic block is a block that is dropped with a fixed probability in the training phase and weighted by its survival probability in the inference phase. Skip connections make a huge number of paths in the network and function as a memory that links the information of earlier and later parts even if stochastic blocks drop. In the training phase, different sub-networks are generated, resulting from stochastic drops in the teacher network, for each batch. The sub-networks still have reliable performance since valid paths still exist.
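A minimal sketch of such a stochastic residual block is given below, assuming PyTorch. The class name and wrapping style are illustrative, and architecture-specific details (e.g., how MobileNet's depth-wise and point-wise convolutions are grouped into one block) are omitted: the block is skipped with probability 1 − p_survival during training and its output is scaled by p_survival at inference.

```python
import torch
import torch.nn as nn

class StochasticBlock(nn.Module):
    """Residual wrapper: drop the inner block randomly in training, scale it at inference."""
    def __init__(self, block, p_survival):
        super().__init__()
        self.block = block
        self.p_survival = p_survival

    def forward(self, x):
        if self.training:
            if torch.rand(1).item() < self.p_survival:
                return x + self.block(x)      # block survives for this batch
            return x                          # block dropped: only the identity path remains
        return x + self.p_survival * self.block(x)
```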
Each sub-network becomes a teacher network for each batch, so the student network is trained with multiple teacher networks over the entire training phase. Figure 1 is an example of sub-networks generated by dropping one block each from a network with the proposed structure. The network consists of 3 blocks; f_i and Id represent the i-th block of the network (i ∈ {1, 2, 3}) and an identity block generated by a skip connection, respectively. Red arrows in the figure mean that the outputs of the blocks are 0. In Figure 1, even if one block drops, each sub-network still has 4 valid paths out of 8 total paths. We observe that: (i) multiple teacher networks are generated from a single teacher network with no additional resources; (ii) the generated networks provide different knowledge to a student network; (iii) the performance of student networks improves with the help of a teacher network of the proposed structure. We succeeded in training the student network to perform better than the ones with the same architecture trained by knowledge distillation (KD) (Hinton, Vinyals, and Dean 2015), attention transfer (AT) (Zagoruyko and Komodakis 2016a), and mutual learning (ML) (Zhang et al. 2018) over the CIFAR-100 (Krizhevsky, Hinton, and others 2009) and tiny ImageNet (Russakovsky et al. 2015) datasets. The rest of this paper is organized as follows. First, we review recent studies related to our work. Then, we demonstrate the proposed scheme in detail. After this, we present experiments and discuss the results. Finally, summary and concluding remarks are given in the conclusion. Knowledge transfer of neural networks was proposed over a decade ago (Buciluǎ, Caruana, and Niculescu-Mizil 2006) but has recently received much attention with some intuitions and a generalized approach (Hinton, Vinyals, and Dean 2015). There, softened outputs of a teacher network are used to transfer knowledge to a student network. They demonstrate that the softened outputs of a teacher network provide a student network with additional supervision and prevent the student network from overfitting. Later, distillation has been applied in transferring knowledge from powerful and easy-to-train networks to small but hard-to-train networks (Romero et al. 2014). Romero et al. suggest using intermediate outputs of teacher networks as hints for student networks. An attention-based distillation method makes use of attention maps of teacher networks which are made from feature maps (Zagoruyko and Komodakis 2016a). To transfer knowledge while avoiding direct mimicry, (Yim et al. 2017) exploits flows calculated from the Gram matrix of feature maps from two layers of a teacher network, and then a student network is trained to mimic the flows of the teacher network. Recently, mutual learning (Zhang et al. 2018) suggests a new paradigm of bidirectional knowledge transfer. In mutual learning, none of the networks are fixed and they exchange knowledge, unlike the conventional teacher-student paradigm where teacher networks are fixed and student networks only receive knowledge. Student networks can be improved further with the help of multiple teacher networks (You et al. 2017). The dissimilarity between teacher networks provides extensive knowledge to a student network and helps to further enhance the student network. Similarly, (Zhang et al. 2018) shows that a neural network can be further improved with the help of multiple neural networks for vision tasks such as image classification and person re-identification.
Also, (Chebotar and Waters 2016) shows that multiple teacher networks are more helpful than a single teacher in speech recognition. Most of the distillation methods improve the performance of student networks with multiple teacher networks, but deploying them is demanding due to additional resources. Instead of directly using multiple teacher networks, (Sau and Balasubramanian 2016) suggest perturbing the outputs of a teacher network to get the effect of multiple teacher networks. However, perturbing outputs with noise can be problematic as it changes the values of the outputs, so that corrupted knowledge of the teacher network is transferred. In our proposed structure, multiple networks of valid paths are generated (see Figure 1), so that reliable and various outputs are transferred to the student network and provide flexible knowledge. In reinforcement learning, encouraging the policy to have an output distribution with high entropy has been used to improve exploration. This prevents the policy from converging early and leads to improved performance (Williams and Peng 1991; Mnih et al. 2016). Also, penalizing confident outputs (Pereyra et al. 2017) and label smoothing (Szegedy et al. 2016) have been shown to help the training of a deep neural network. Regularizing highly confident outputs helps train a deep neural network since it prevents over-fitting and large differences between output values, which increases the adaptivity of the network. In the same vein, highly confident outputs of a teacher network are challenging for student networks to learn. In ML (Zhang et al. 2018), it has been shown that the ensemble of multiple networks is a worse teacher than the individual networks. Individual networks provide higher-entropy outputs than the ensemble, so the salient secondary values in the outputs can be more helpful in generalizing student networks. To get the effect of multiple teacher networks from a single teacher network, we propose to add stochastic blocks and skip connections to the teacher network. In this section, first, we explain in detail how to change the structure of the teacher network to make multiple sub-networks. Then, we demonstrate that multiple sub-networks can be used as multiple teacher networks. ResNet (He et al. 2016) and Wide ResNet (Zagoruyko and Komodakis 2016b) already consist of blocks and contain skip connections. For MobileNet (Howard et al. 2017), we group a depth-wise convolution and a point-wise convolution as one block and add skip connections from each input of the block to the corresponding output. Skip connections in residual networks prevent the vanishing gradient problem, so that deeper networks can be trained well. From another perspective, skip connections let a residual network be viewed as an ensemble of multiple paths of different lengths (Veit, Wilber, and Belongie 2016). When we set the i-th block of a residual network as f_i, then the output o_{i+1} of the (i+1)-th block is expressed as o_{i+1} = f_{i+1}(o_i) + o_i. Since there are two paths from a previous output to the next output, if there are n blocks in the network, 2^n paths exist from the input layer to the output layer.
This is because there still exist valid paths even if some blocks of a residual network drop (if k blocks are dropped from n blocks, 2^(n−k) valid paths still exist). Therefore, when a neural network consists of blocks and contains skip connections, multiple neural networks with adequate performance are generated by dropping blocks randomly. To implement this idea in the training phase, we set blocks of neural networks to be stochastic, following the stochastic depth scheme. Since initial blocks extract low-level features that will be used by later blocks, we choose a linear decay mode to set the survival probability of each block. p_end denotes the survival probability of the last block and p^i_survival denotes that of the i-th block, expressed as p^i_survival = 1 − (i / (N − 1)) (1 − p_end), where N is the total number of blocks and i ∈ {0, 1, ..., N − 1}. p_end implies a trade-off between quantity and quality of sub-networks. If p_end is high, each generated sub-network will be longer, so its performance will be better than that of shorter sub-networks. However, a high p_end generates fewer sub-networks. In the opposite case, more sub-networks are generated but each of them may perform a bit worse. The optimal p_end appears to differ across teacher-student pairs. We tried p_end ranging over [0.5, 0.9] with an interval of 0.1 and chose the p_end that improves the student network most for each teacher-student pair. One might wonder whether sub-networks can play the role of teacher networks and provide independent knowledge so that student networks get sufficient knowledge. For a residual network of 110 layers, it has been shown that sub-networks generated by dropping some blocks show competent performance and are independent of each other (Veit, Wilber, and Belongie 2016). Convolutional neural networks (CNNs) based on residual networks will probably have the same characteristic; however, other networks like MobileNet are not guaranteed to generate reliable sub-networks with the proposed structure. To verify the efficacy, we show the accuracy when each block is dropped from pre-trained networks for three kinds of networks. Figure 2 shows the accuracy when each block is dropped from a residual network of 32 layers, a mobile network, and a wide residual network 28-10. In Figure 2, sto and basic represent networks with the proposed structure and the original networks, respectively. Sto networks are more robust to dropping blocks than basic networks, so we pre-train teacher networks with the proposed structure. It seems that dropping the initial blocks of the mobile network and the 4th block of wide resnet 28-10 degrades the performance significantly. To observe the impact of such blocks that are fatal to drop, we compare cases where these blocks are dropped or not dropped like the other blocks in an ablation study. The performance of sub-networks lags behind the original network. However, they sometimes predict correctly while the original network does not. Also, they generate outputs with high entropy, which are easier for student networks to learn (see Figure 3). It is known that regularizing a neural network to be less confident improves performance (Pereyra et al. 2017). Similar results are also observed in deep mutual learning. In (Zhang et al. 2018), they show that using an ensemble of n networks as a teacher is less helpful than using n individual networks as n teachers. This is because the ensemble makes the outputs have low entropy, which means that the secondary values of outputs become small. The secondary values are salient cues in transferring knowledge as they provide important information such as relations between classes.
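A small sketch of the linear-decay survival schedule described above is given below; it assumes the first block always survives (probability 1) and decays linearly down to p_end at the last block, which matches the description but is still an assumption about the exact formula.

```python
def linear_decay_survival(num_blocks, p_end):
    """Survival probability of block i decays linearly from 1.0 (block 0) to p_end (last block)."""
    # Requires num_blocks >= 2 so the denominator is non-zero.
    return [1.0 - (i / (num_blocks - 1)) * (1.0 - p_end) for i in range(num_blocks)]

# Example: 5 blocks with p_end = 0.6 -> [1.0, 0.9, 0.8, 0.7, 0.6]
```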
Dropping blocks of the teacher network is analogous to using individual networks instead of their ensemble. Thus, sub-networks can provide student networks with rich knowledge. Also, knowledge of the original network is fully utilized as all the blocks of the network are used in generating sub-networks in the end. Generated sub-networks share considerable parts of the original network but provide different knowledge to a student network. The degree of the difference is similar to that of individual neural networks. We confirm the similarity with ResNet 32 and attach the related table in the appendix. We apply the proposed method to other distillation techniques, KD, AT, and ML. To apply it to KD and AT, the teacher network is changed to have skip connections and stochastic blocks and the other settings are not changed. In mutual learning, the notions of teacher and student vanish since both networks exchange knowledge with each other. But for convenience, we denote a network with large capacity as a teacher network and the other as a student network. The teacher network is changed into the proposed structure as in KD and AT. To apply the proposed structure to mutual learning, both networks should be pre-trained since teacher networks are not fixed. If the networks are not pre-trained, they cannot be improved because of the stochastic property of the teacher network. Let us assume a situation where both networks are not pre-trained. At the beginning of the training process, sub-networks of a teacher network are randomized. In mutual training, each sub-network and a student network exchange knowledge. However, since a different sub-network is used each time, for many epochs the student gets random knowledge from randomized sub-networks, so it does not improve. Also, sub-networks are not optimized due to the disturbing knowledge from the student network. In our simulation, multiple sub-networks are not used at the same time; instead, one sub-network generated by stochastic drops is used as a teacher network for each batch. We evaluate the proposed method with two datasets: CIFAR-100 (Krizhevsky, Hinton, and others 2009) and tiny ImageNet (Russakovsky et al. 2015). The CIFAR-100 dataset consists of 32 × 32 RGB color images drawn from 100 classes, which are split into 50,000 training and 10,000 test images. The tiny ImageNet dataset is a down-sampled version of the ImageNet dataset. It consists of 64 × 64 RGB color images drawn from 200 classes, which are split into 100,000 training and 10,000 test images. For CIFAR-100, we normalize each image and augment the training images. The data augmentation includes horizontal flips and random crops from images padded by 4 pixels on each side, filling missing pixels with reflections of the original image. Each network is trained for 200 epochs with a batch size of 128 and a learning rate that is decreased every 60 epochs. For tiny ImageNet, we simulate with the pure dataset without augmentation. Each network is trained for 100 epochs with a batch size of 128 and a learning rate that is decreased every 40 epochs. A stochastic gradient descent optimizer with momentum of 0.9 is used for the whole simulation. The initial learning rate is 0.01 for the ML case and 0.1 for the other cases. Four CNNs are used: WRN, ResNet, MobileNet, and VGG (Simonyan and Zisserman 2014). The CNNs are modified to the proposed structure when they are used as teacher networks. All the results in the simulation are averaged over 3 runs. Here, we present simulation results of knowledge transfer methods on CIFAR-100.
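Before turning to the results, the per-batch procedure described above — a single, stochastically sampled teacher sub-network per batch combined with a distillation loss — could look roughly like the sketch below, assuming PyTorch. The temperature T and weight alpha are illustrative hyper-parameters, not the paper's values, and keeping the teacher in train() mode is the simplification used here to keep the stochastic drops active.

```python
import torch
import torch.nn.functional as F

def kd_step(student, teacher, x, y, optimizer, T=4.0, alpha=0.9):
    """One KD update where the teacher's stochastic blocks yield a new sub-network per batch."""
    teacher.train()                         # keeps stochastic block drops active for this batch
    with torch.no_grad():
        t_logits = teacher(x)               # this batch's sampled sub-network teacher
    s_logits = student(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(s_logits, y)
    loss = alpha * kd + (1 - alpha) * ce
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```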
Table 1 shows the simulation results of KD and KD with the proposed structure. As can be seen in Table 1, we confirm that the proposed structure further improves the performance of student networks with KD. In the case of the (WRN 28-10, ResNet 32) pair, the accuracy of ResNet 32 trained with the proposed structure improves by more than 5% compared to when ResNet 32 is trained with the pure WRN 28-10. Table 2 shows the simulation results of AT and AT with the proposed structure. In AT, attention maps of teacher and student networks should have the same spatial size, so we used residual networks and wide residual networks to fit the spatial size conveniently. Attention maps are made by a square sum over the channel axis followed by l2 normalization. The proposed structure shows further improvement over the pure AT method. We confirm that the proposed structure improves student networks further with the AT method in all the pairs. Table 3 shows the simulation results of ML and ML with the proposed structure. Only teacher networks are changed to the proposed structure, as mentioned in the previous section, and a teacher network and a student network exchange knowledge with each other. The network pairs in Table 3 are the same as those of the paper (Zhang et al. 2018). The proposed structure still shows further improvement in the peer-learning paradigm, so both networks are improved further. Here, we present simulation results of knowledge transfer methods on tiny ImageNet. Tables 4 and 5 show the simulation results of KD and AT with the proposed structure. As in the simulation on CIFAR-100, the proposed structure generally improves student networks, but there exists one pair each for KD and AT in which the student network is not improved. In Figure 2, when certain blocks drop, the performance of the networks drops significantly. These blocks are the 4th block of WRN 28-10 and the 1st to 6th blocks of MobileNet. We name these blocks significant blocks. Sub-networks generated by dropping significant blocks have low performance, so the networks might not be adequate teacher networks. Hence, we observe whether student networks improve further when sub-networks generated by dropping all blocks except the significant blocks are used as teacher networks. We use KD and the CIFAR-100 dataset for the (WRN 28-10, ResNet 32) and (MobileNet, ResNet 32) pairs. In Table 6, partial means that significant blocks do not drop and full means that all the blocks drop stochastically in the training phase. The results show that using more teacher networks is more helpful in improving a student network even if some of them do not perform well. This is in line with the results of ML (Zhang et al. 2018), where a larger network still benefits from being trained together with a smaller network. In this work, we propose to change the structure of a teacher network to get the effect of multiple teacher networks with the same resources as one teacher network. With our proposed structure, we obtain multiple teacher networks without additional resources, so that compact networks improve further than those trained with conventional transfer methods. The proposed structure can be easily applied to other transfer methods and tasks, e.g. segmentation or object detection. | The goal of this paper is to get the effect of multiple teacher networks by exploiting stochastic blocks and skip connections. | 1,299 | scitldr |